IT NEWS

Apple’s search for child abuse imagery raises serious privacy questions

The Internet has been on fire since the August 4 discovery (disclosed publicly by Matthew Green) that Apple will be monitoring photos uploaded to iCloud for child sexual abuse material (CSAM). Some see this as a great move by Apple that will protect children. Others view it as a potentially dangerous slide away from privacy that may not actually protect children—and, in fact, could actually cause some children to come to harm.

How does this work?

It’s important to understand that, contrary to what it sounds like, Apple will not be rifling through all your photos on iCloud. All scanning for CSAM material will be done on the device itself, by an artificial intelligence algorithm. That system, called neuralMatch, will perform two functions.

The first is to create a hash of any photos on the device before they are uploaded to iCloud. (A “hash” is a computed value that should be a unique representation of a file, but that cannot be reversed to obtain a copy of the file itself.) This hash will be compared to a database of hashes of known CSAM materials on the device. The result is recorded cryptographically and stored on iCloud alongside the photo. If the user passes some minimum threshold of photos that match known CSAM hashes, Apple will be able to access those photos and the iCloud account will be shut down.
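The matching-and-threshold mechanism described above can be sketched in a few lines. To be clear, this is a deliberately simplified illustration, not Apple's actual system: Apple uses a perceptual hash (reportedly called NeuralHash) so that near-duplicate images still match, whereas this toy uses a plain SHA-256 digest, and the database, threshold, and photos below are all hypothetical.

```python
import hashlib

# Hypothetical database of hex digests of known bad files. In reality this
# would be a database of perceptual hashes, not plain SHA-256 digests.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",  # SHA-256 of b"foo"
}

# Hypothetical: the account only becomes actionable past a minimum match count.
MATCH_THRESHOLD = 3

def file_hash(data: bytes) -> str:
    """One-way digest: identifies a file without revealing its contents."""
    return hashlib.sha256(data).hexdigest()

def count_matches(photos: list[bytes]) -> int:
    """How many photos hash to a value in the known-bad database."""
    return sum(1 for p in photos if file_hash(p) in KNOWN_HASHES)

photos = [b"foo", b"holiday snapshot bytes", b"foo"]
matches = count_matches(photos)
flagged = matches >= MATCH_THRESHOLD  # two matches: below the threshold, not flagged
```

The threshold is the important design detail: no single match is supposed to be enough to expose an account, which is Apple's main defence against one-off false positives.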

The second function is to protect children under 13 against sending or receiving CSAM images. Apple’s AI will attempt to detect whether images sent or received via iMessage have such content. In such cases, the child will be warned, and if they choose to view or send the content anyway, their parents will be notified. For children between 13 and 18, the same warning will be shown, but parents will apparently not be notified. This all relies on the child’s Apple ID being managed under a family account.

Why should I worry about monitoring a child’s texts?

There are a lot of potential problems with this. It can be a serious violation of a child’s privacy, and the feature’s behavior is predicated on an assumption that may not be true: that a child’s relationship with their parents is a healthy one. This is not always the case. Parents or guardians could be complicit in the creation of CSAM content. Further, an abusive parent could see a warning about a legitimate photo that was falsely identified as CSAM content, and could harm the child based on false information. (Yes, the parent would have the option to view the photo, but it’s possible a parent may choose not to. I certainly wouldn’t want such an image of my child in my head.)

Also, consider the fact that this applies to being sent an image, not just sending an image. Imagine the trouble a bully or scammer could cause by sending CSAM material, or the damage that could be done if a child of an abusive parent were sent a CSAM image and viewed it without fully understanding why it was being blocked or what the consequences would be!

And finally, as the EFF’s Eva Galperin pointed out on Twitter, there is also the danger that this well-intentioned functionality “is going to out a lot of queer kids to their homophobic parents”.

What’s the problem with monitoring photos uploaded to iCloud?

Although matching files by hash has a low chance of false positives, they can definitely happen. Apple claims the chance of a false positive should be one in one trillion, but it remains to be seen whether that holds in practice.
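To put that figure in perspective, here is a back-of-the-envelope calculation. The one-in-a-trillion rate is Apple's claim; the volume of photos checked is purely hypothetical, invented for illustration only.

```python
# Back-of-the-envelope: expected false positives at scale.
# The rate is Apple's claimed figure; the photo count is hypothetical.
false_positive_rate = 1e-12            # claimed chance a given check is a false match
photos_checked = 1_000_000_000_000     # hypothetical: a trillion photos scanned

expected_false_positives = false_positive_rate * photos_checked
# Even at this (made-up) volume, only about one false match would be expected,
# which is why the per-check rate matters so much at iCloud scale.
```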

Apple is providing an appeals process for cases where an account is wrongly closed because of false positives. However, anyone who has been involved in reviews and appeals with Apple knows they don’t always go your way, nor are they always speedy. Time will tell how big a problem this is.

What about the privacy issues?

For a company that has constantly talked about protecting users’ privacy, this seems like a reversal. However, Apple has clearly put a lot of thought into it, and is emphasizing that none of the scanning happens on its servers: all the processing happens on the device, and Apple does not see the images (unless it’s determined that abuse is happening).

Further, CSAM is a big problem. I don’t think there’s anyone—other than pedophiles—who wouldn’t want to see all production of and trafficking in CSAM brought to an end. So many will praise Apple for taking this action.

This doesn’t mean there aren’t issues, though. Many view this as a first step onto a slippery slope. Blocking CSAM is a good thing, but there’s nothing to prevent the tools that Apple has built from being used for other things. For example, suppose the US government puts pressure on Apple to start detecting terrorism-related content. What exactly would that look like, if Apple decided to—or was forced to—comply? What would happen if a law-abiding person’s iCloud account was flagged as being involved in terrorist activity due to false positives on their photos? And what about tracking more prosaic crimes, such as drug use?

I could go on, as there are lots of things that governments of the world—including the US government—might want Apple to track. Although I tend to be willing to extend trust to Apple, this may not be something that is entirely within Apple’s control. They are a US company, and it’s possible for future US law to force Apple to do things their leadership wouldn’t have wanted to do.

We’ve also seen Apple bend to the desires of governments before. For example, Apple has conceded to demands from the government of China that are counter to Apple’s philosophy. Although the cynical point to this as evidence that Apple is more interested in profits from China’s large market (and they’re not entirely wrong), there’s more to it than that. Most of Apple’s manufacturing is done in China, and Apple would be in a huge pile of trouble if China decided to shut down its ability to do business there. This means China has leverage it can use to make Apple bend to its wishes, at least within China.

Why is Apple doing this?

I’m sure there will be a lot of debate and speculation on this topic. Part of it is undoubtedly a desire to protect children and prevent distribution of CSAM. Part of it may be marketing.

To me, though, this all boils down to a political move. Apple has been a fantastic advocate for encryption and privacy, even going to the extreme of refusing the FBI’s demands relating to gaining access to a suspected terrorist’s iPhone.

It’s a common request from law enforcement to tech companies to give them “backdoors.” Essentially, this boils down to some kind of private access to users’ data, in theory accessible only to law enforcement agents. The problem with such backdoors is that they don’t tend to remain secret. Hackers can find them and gain access, or rogue government agents can abuse or even sell their access. There is no such thing as a secure backdoor.

Apple’s refusal to create backdoors for government access has angered many who believe that Apple is preventing law enforcement from doing their jobs. A common refrain from people pushing for backdoors is the old standby, “but think of the children!” CSAM is frequently brought up as a reason why access to messaging, file storage, and so on is needed. This is a somewhat clever argument, because it makes it seem (falsely) as though arguing against backdoors is also arguing in support of pedophiles.

By taking specific action against CSAM, Apple has effectively neutered this argument. Politicians will no longer be able to (in essence) accuse Apple of protecting pedophiles as a means of pushing for legislation to require backdoors.

Conclusion

In the end, this is something that is going to cause a lot of controversy and differences of opinion. Some are in support of Apple’s actions, while others are adamantly in opposition. Apple seems to be trying to do the right thing here, and appears to have put a lot of effort into ensuring that the way this is done is most respectful of privacy, but there are some legitimate reasons to question whether this new feature is a good idea.

Those reasons should not be conflated with support for or opposition to CSAM, which we can all agree is a very bad thing. There’s a lot of discussion that should be had on this topic, but CSAM is a very emotional subject, and we should all try to prevent that from coloring our evaluation of the potential problems with Apple’s approach.

The post Apple’s search for child abuse imagery raises serious privacy questions appeared first on Malwarebytes Labs.

Edge’s Super Duper Secure Mode benchmarked: How much speed would you trade for security?

In an attempt to make Edge more secure, the Microsoft Vulnerability Research team has started to experiment with disabling Just-In-Time (JIT) compilation in the browser’s V8 JavaScript engine, to create what it’s calling Super Duper Secure Mode.

The reasoning behind this experiment sounds valid. A little under half of the CVEs issued for V8 relate to the JIT compiler, and more than half of all ‘in-the-wild’ Chrome exploits abuse JIT bugs. (Modern versions of Edge are based on the same Chromium code as Google’s Chrome browser, so Chrome exploits also affect Edge.) Microsoft is wondering out loud if the simplest way to deal with such a problematic sub-system is to just disable it and see where it takes them.

Disabling JIT compilation comes at a price though: speed. JIT compilation is a performance feature that speeds up the execution of JavaScript, the most popular programming language used on the web. Because it sits behind so many web applications, the speed that JavaScript runs has a direct effect on how fast and responsive web applications are.

We were curious just how big an effect it would have.

What is JIT compilation?

A good definition of JIT compilation is this one:

“Just-in-time (JIT) compilation … is a way of executing computer code that involves compilation during execution of a program (at run time) rather than before execution.”

The reason to use JIT compilation is simple: speed. JIT compilation combines the speed of compiled code with the flexibility of interpretation. It allows for more optimized code to be generated. And to limit the overhead, many JIT compilers only compile the code paths that are frequently used.
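The "compile only the hot paths" idea can be modelled in a toy sketch: a slow, interpreter-style implementation is swapped for a fast one once it has run often enough. Everything below is illustrative only; real JITs like V8's profile running bytecode and emit optimised machine code, and the threshold and function names here are invented.

```python
# Toy model of hot-path JIT compilation: after enough calls, the slow
# "interpreted" implementation is replaced by a fast "compiled" one.
HOT_THRESHOLD = 100  # hypothetical call count before a path counts as "hot"

def make_jit(interpreted, compiled, threshold=HOT_THRESHOLD):
    state = {"calls": 0, "impl": interpreted}

    def call(*args):
        state["calls"] += 1
        if state["calls"] == threshold:
            state["impl"] = compiled  # the hot path gets the fast version
        return state["impl"](*args)

    call.state = state  # exposed so we can observe which version is active
    return call

# "Interpreted": naive summation. "Compiled": the closed-form optimisation.
def slow_sum(n):
    return sum(range(n + 1))

def fast_sum(n):
    return n * (n + 1) // 2

jit_sum = make_jit(slow_sum, fast_sum)
```

Both implementations must produce identical results; the JIT only changes how fast the answer arrives, which is also why a JIT bug (where the optimised code subtly disagrees with the interpreter) is such an attractive target for exploits.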

V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others. Since Edge is based on Chromium it uses V8 as well.

The speed impact of disabling Edge’s JIT compiler

We ran a few quick tests to see how big the impact of disabling JIT would be. To run these tests we compared the latest official release of Edge (Version 92.0.902.67) with the latest available Microsoft Edge Beta (Version 93.0.961.11) with Super Duper Secure Mode enabled and disabled. We found that the speed differences between the latest official release and the beta were marginal, so we have left those out of the results.

The tests were done in a VM on a slow connection. As a benchmark we used SunSpider 1.0.2. We wanted to try the more elaborate JetStream2, but for some reason it never ran to completion. (If you get JetStream2 to work, we’d love to hear from you.)

SunSpider says its benchmarking focusses “on the kinds of actual problems developers solve with JavaScript today”, is “balanced between different areas of the [JavaScript] language”, and runs each test multiple times to determine a 95% confidence interval and whether you have a statistically significant result.

Test          SDSM enabled        SDSM disabled       Speed up
3d            76.7ms +/- 3.4%     59.2ms +/- 3.6%     1.3x
access        102.0ms +/- 0.8%    33.7ms +/- 4.1%     3.03x
bitops        98.4ms +/- 1.0%     17.1ms +/- 3.7%     5.75x
controlflow   9.1ms +/- 2.5%      5.6ms +/- 6.6%      1.63x
crypto        46.0ms +/- 1.5%     37.9ms +/- 8.1%     1.21x
date          23.6ms +/- 1.6%     26.9ms +/- 2.0%     1.14x
math          61.4ms +/- 1.5%     28.6ms +/- 2.4%     2.15x
regexp        36.0ms +/- 2.1%     5.6ms +/- 6.6%      6.43x
string        70.1ms +/- 2.2%     63.2ms +/- 2.1%     1.11x
Total         523.3ms +/- 0.6%    277.8ms +/- 1.9%    1.88x
SunSpider 1.0.2 JavaScript Benchmark Results comparing Microsoft Edge Beta (Version 93.0.961.11) with Super Duper Secure Mode enabled and disabled. All the results were statistically significant.

Our results show that enabling the JIT speeds up JavaScript execution in Edge by a factor of 1.88. So disabling JIT compilation makes Edge’s JavaScript processing more secure, but almost twice as slow.
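Those speed-up factors follow directly from the table: each is simply the SDSM-enabled time divided by the SDSM-disabled time. A few of them, recomputed:

```python
# Recomputing a sample of per-test speed-ups from the SunSpider table above
# (time with SDSM enabled divided by time with SDSM disabled).
enabled_ms  = {"3d": 76.7, "access": 102.0, "bitops": 98.4, "regexp": 36.0}
disabled_ms = {"3d": 59.2, "access": 33.7,  "bitops": 17.1, "regexp": 5.6}

speedups = {t: round(enabled_ms[t] / disabled_ms[t], 2) for t in enabled_ms}

# The headline number: total time ratio, matching the 1.88x in the table.
total_speedup = round(523.3 / 277.8, 2)
```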

A few remarks before you draw your own conclusions:

  • The benchmark tests the core JavaScript language only and many more things affect the speed of the web than JavaScript execution. So this does not mean that normal surfing will be twice as slow!
  • I repeated the tests several times and while there were some differences the general comparison was roughly the same every time. (Results varied between a 1.87x and 1.90x speed up when JIT compilation was enabled.)

Microsoft claims that users of Super Duper Secure Mode rarely notice a difference in their daily browsing. That will probably depend on the type of sites you visit, what else you’re doing at the time, and so on, but it is worth noting that tools that measure web performance, including Google’s Core Web Vitals, attach great importance to JavaScript because slow JavaScript can have such a profound effect on user experience.

Not without a replacement

Regardless, history teaches us that simply disabling the V8 JIT compiler is not going to be a long-term solution. The first piece of advice anyone complaining about a slow browsing experience gets on a computer forum is going to sound like “enable JIT”. We can predict this with great confidence based on similar experience with anti-virus software.

The general public is not going to trade speed for security, so Microsoft will eventually have to provide people with an alternative. What are the alternatives? It could decide to fix V8 and address whatever the root cause of the V8 bugs is. If it turns to another JavaScript engine entirely, it has a choice of perhaps four: Chakra or ChakraCore, free and open-source JavaScript engines developed by Microsoft for its Edge Legacy web browser; Duktape; or Moddable.

And there are a few more, but realistically speaking, for Microsoft to adapt or adopt one of these engines for Edge would mean turning away from Chromium, which it has only recently turned to. It seems unlikely that it will immediately create a “hard fork”, so to speak. For now the goal of the Super Duper Secure Mode experiment is to raise the bar for attackers.

The security problems of JIT compilation

As we mentioned earlier, disabling JIT compilation in Edge reduces the number of options that an attacker has (known as reducing the attack surface). But another problem with JIT compilation is that it is incompatible with some mitigation technologies. The Microsoft Vulnerability Research team mentions a few security features that can’t be used when JIT is enabled:

  • Control-flow Enforcement Technology (CET), a hardware-based exploit mitigation from Intel. Intel has been actively collaborating with Microsoft and other industry partners to address control-flow hijacking by using this technology to stop attackers from using existing code running from executable memory in a creative way to change program behavior.
  • Arbitrary Code Guard (ACG) helps protect against a malicious attacker loading the code of their choice into memory through a memory safety vulnerability and being able to execute that code. Arbitrary code guard prevents allocating any memory as executable, which presents a compatibility issue with approaches such as Just-in-Time (JIT) compilers.

We are thrilled that Microsoft is looking at raising the security standard of its Edge browser. After an unprecedented number of Chrome zero-days in 2021, and a number of high-profile security incidents related to several Microsoft products, this is a welcome change of pace.

Try it yourself

Users who want to try Super Duper Secure Mode for themselves will have to get hold of one of the Microsoft Edge preview releases (Beta, Dev, or Canary). If you have one of these running, you can enter edge://flags/#edge-enable-super-duper-secure-mode into the address bar of the browser and set the new feature to “Enabled”.


Since this is an experiment, we don’t have to take the name Super Duper Secure Mode too seriously. The name is probably not here to stay, and the experiment’s fate may be an indication of how likely it is that disabling the JIT compiler without a replacement will ever become mainstream.

Stay safe, everyone!


What is Tor?

Tor, The Onion Router

Tor (The Onion Router) is free software used to keep your online communications safe and secure from outside observers. It’s designed to block tracking and eavesdropping, resist fingerprinting (where services tie your browser and device information to an identity), and to hide the location of the people using it.

The network of websites and services that are only accessible using Tor is often referred to as “The Dark Web” or, more correctly, “The Dark Net”. Although the Dark Web has a reputation for being a place where criminal activity takes place there is nothing intrinsically bad or criminal about Tor. In fact, it was originally created to keep US intelligence communications safe. If your primary concern online is to try and stay anonymous, this is something you’d turn to.

How Tor works

Tor uses layers of encryption to keep your traffic secure. (It’s called “onion” routing because it has multiple layers, like an onion.) Traffic passes through random servers (or nodes) kept running by, well, anybody. You won’t know who is responsible for running the nodes, and the nodes don’t know, and can’t see, what traffic is passing through them.

By default, traffic passes through three nodes, called a Circuit, and the nodes in the Circuit are changed every ten minutes. Each node peels back one layer of encryption. The encryption ensures that each node is only aware of the node that came before it and the node that comes after it. Tor uses three nodes in a circuit because it’s the smallest number of nodes that ensures no point in the system can know both where your traffic originated and where it’s eventually going.
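The layering can be sketched in a toy model: the client wraps the message in one layer per node, and each node peels exactly one layer. This is illustrative only; real Tor negotiates keys with each node and uses proper ciphers, whereas the XOR "encryption" and the key names below are invented for the sketch.

```python
# Toy onion routing: one layer of "encryption" per node in the circuit.
# XOR with a repeating key stands in for real encryption; XOR is its own
# inverse, so applying the same key twice recovers the original bytes.

def xor_layer(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical per-node keys for a three-node circuit: entry, middle, exit.
circuit_keys = [b"entry-key", b"middle-key", b"exit-key"]

def wrap(message: bytes, keys) -> bytes:
    """Client side: encrypt for the exit node first, entry node last,
    so the entry node's layer ends up outermost."""
    for key in reversed(keys):
        message = xor_layer(message, key)
    return message

def relay(onion: bytes, keys) -> bytes:
    """Each node in turn peels one layer; only after the exit node's
    layer is removed does the plaintext emerge."""
    for key in keys:
        onion = xor_layer(onion, key)
    return onion

onion = wrap(b"GET /index.html", circuit_keys)
plaintext = relay(onion, circuit_keys)
```

Note what each node can see in this model: the entry node knows who you are but only sees a still-encrypted blob; the exit node sees the plaintext but only learned it from the middle node, not from you.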

Tor can either be used to access services on the regular Internet or services that are also hidden behind Tor. If you use Tor to access the Internet your Circuit of three nodes acts like an anonymous and very secure Virtual Private Network (VPN) that hides your IP address from the things you use. If you use Tor to access other services that are also hidden by Tor then neither side of the communication can see the IP address of the other.

There are numerous ways to use Tor. You can configure your computer so that all of its communications use the Tor network, or you can use individual applications that make use of it, like the Tor Messenger, launched in 2015. Most people’s first, and perhaps only, experience of Tor is via the appropriately named Tor browser though, which is used for secure web browsing both on the regular web and the Dark Web. As a result, that’s what we’ll focus on below.

The Tor browser

The Tor Browser, which began development in 2008, is a web browser with multiple security and privacy options built in by default. A modded Firefox browser, it connects to the Internet using Tor, and comes with the NoScript and HTTPS Everywhere plugins pre-installed. It also has a number of security defaults cranked up to eleven, to prevent things like browser fingerprinting. It can be used for browsing regular websites securely, or for browsing websites on the Dark Web.

As far as the default operations of the Tor Browser go, NoScript allows active content for trusted domains only. In practice, what this means is that (for example) a site you’re visiting for the first time won’t be allowed to run JavaScript until you allow it.

HTTPS Everywhere helps by ensuring that you don’t accidentally connect to websites using the unencrypted HTTP protocol.

The Security Level settings, available via the browser’s preferences, allow users to customise a wealth of security options, or choose a default.

Tor Browser’s Security Level screen

The default Standard option enables all Tor browser and website features. Safer disables a number of common website options, such as JavaScript on non-HTTPS sites, and makes audio and video click-to-play. Safest “only allows website features required for static sites and basic services. These changes affect images, media, and scripts.” In other words, it’s as bare-bones a web experience as you’re likely to have. Many sites simply will not function. There’s a big trade-off in functionality for security here, and casual users probably won’t have much interest in this.

Possible risks of using Tor

The fact that anyone can run a Tor node is a feature, but it’s also a possible threat. There’s no guarantee the person running a node isn’t a rogue entity and the total number of nodes is relatively small: Just a few thousand. Although Tor is designed to be resistant to snooping nodes, the last node in a Circuit (known as an Exit node) can be used for spying on traffic that is leaving Tor and joining the regular Internet.

Rogue / snooping exit nodes are definitely a concern. Law enforcement also definitely takes an interest in this area, so temper your expectations appropriately.

Law enforcement or threat actors that are present on a large number of nodes can also theoretically run “correlation attacks”. These undo Tor users’ anonymity by trying to match up traffic entering the Tor network with traffic leaving the Tor network, based on things like timing. Tor isn’t perfect, but it hugely increases the time and effort an adversary would have to expend to spy on you.

One school of thought commonly seen online suggests using Tor in the interest of anonymity makes you stand out and is akin to firing a large “I AM HERE” flare gun into the sky. While this may be true in some cases, for most people using it this probably isn’t an issue.

By comparison, people using a VPN are probably more interested in privacy than anonymity. A VPN is run by a single organisation, as opposed to bouncing you through lots of random nodes maintained by complete strangers. Because Tor uses more nodes and more encryption than a VPN, it is normally slower.

VPNs can also be compromised, and user data put up for grabs. Nothing is 100% guaranteed to be secure, and that holds true here whether using VPNs or Tor. It’s up to users to pick the option most suited to their needs, and account for things potentially going wrong.

That isn’t to dissuade you from using either service; if you’re considering using either, there’s likely a valid need for it. In practical terms a little boost in anonymity and / or privacy can only be a good thing, so get a feel for what options are available and stay safe regardless of your ultimate choice.


Amazon will pay you $10 for your palm prints. Should you be worried?

Retail giant Amazon recently offered to pay $10 USD for your palm prints. Would you offer them your hand?

Many seem to have homed in on the price, seething that it is too little for something as priceless and unique as a palm print, not realizing that when it comes to registering biometric data in general, everyone gives their prints away for free.

Palm print prices aside, Amazon is definitely encouraging current and potential customers to enrol their prints using Amazon One, its new contactless identity service.

Amazon One is Amazon’s palm-powered contactless identity service

Amazon One was introduced in September 2020 as (according to Dilip Kumar, Vice President of Physical Retail & Technology for Amazon in an official post) “a quick, reliable, and secure way for people to identify themselves or authorize a transaction while moving seamlessly through their day.” The announcement came in the thick of the Covid-19 pandemic, which seemed to give it a boost due to its non-contact nature.

Since then, Amazon has rolled out Amazon One to more of its stores in the Seattle area and beyond. This biometric scanner can now be found in use in Amazon Books, Amazon Go convenience stores, Amazon Go Grocery, and Amazon 4-star stores in various US states, including Maryland, New Jersey, New York, and Texas.

How does it work?

Amazon says it scans and captures the minutest details of a palm, including ridges, lines, and features under the skin like vein patterns, to create a unique palm signature. Why palm prints, you ask? In its FAQ, Amazon claims that “palm recognition is considered more private than some biometric alternatives because you can’t determine a person’s identity by looking at an image of their palm.”

To a degree, this is true. It’s certainly less obviously personally identifiable than face recognition and it’s difficult to take a photo of someone’s palm and use that to spoof anything. But, like fingerprints, latent palm prints can also be lifted or picked up from touched objects, making it a viable way to help identify an individual. In fact, the forensic science community generally accepts palm prints as positive identification.

Palm signatures are created, encrypted, and stored in the cloud. Palm images, card details, and phone numbers are also never stored in the Amazon One device, and (the company further claims) they are “protected at all times, both at rest and in-transit”. How these palm signatures are encrypted, Amazon didn’t specify. They also didn’t say if they comply with current standards for capturing, exchanging, and storing biometric data.

Amazon is well capable of creating a very secure system, but any plan to create a centralized repository of authentication information should give us pause. Particularly if that information is biometrics that can’t be changed if they’re leaked or breached. It is the opposite of the approach being taken by FIDO2, for example, a passwordless authentication scheme that can be used with biometrics without the biometric data ever leaving its owner’s control.

Amazon stores palm data indefinitely, unless the user manually deletes it from their profile or doesn’t use the feature for two years.

Becoming a transactional tool

Critics have pointed out that having our palms scanned for increased convenience and quick(er) closing of transactions is unnecessary when a contactless payment card can do the exact same thing. And, unlike a palm print, a payment card can be easily changed if it’s compromised. Worse, with our biometric data in its hands, Amazon can essentially do what it wants with it—and this could go beyond targeted advertising, considering that Amazon has already opened its doors to third-party companies who are interested in making Amazon One a part of their business.

It’s not a long shot to imagine that the retail giant could very well involve law enforcement once again: either selling them the biometric recognition service/technology or working with them for the purpose of surveillance, both of which Amazon has done in the past.

What particularly concerns Elizabeth Renieris, a lawyer and policy expert on data governance, is how Amazon is tying you as a person, via your palm print, to your shopping habits and purchase history. She said in an interview with The Verge last year: “The closest thing we have now is things like Apple Wallet and Apple Pay and other device-based payments infrastructure, but I just think, philosophically and ethically, there’s extreme value in having a physical separation between your transaction infrastructure and your physical self—your personhood and your body. As we merge the two…a lot of the rights that are based on the boundedness of a person are further threatened.”

“Your physical self is literally becoming a transactional tool,” she said.


COVID-19 vaccine appointment system attacked in Italy

In another cyberattack on a healthcare system, threat actors have tried to throw a wrench into the ongoing COVID-19 vaccine roll-out in the region of Lazio, Italy. The large and densely populated region is the country’s second most populous and includes the country’s capital, Rome.

On Sunday the Facebook page of the region informed the public that hackers had disabled the systems of the regional health care agency.

Lazio's Facebook page warns of a "hacker attack" on its systems

Only 10 hours later the region communicated through the same channel that standing vaccination appointments could proceed as planned. But it was not yet possible to make new appointments. Later it turned out that besides the vaccination appointment system, more of the region’s systems had suffered from the attack.

The attack

Details of the attack are sparse, most likely because the investigation is still ongoing. The Facebook page mentions a “virus”, but this could be the result of the common misconception that leads many people to call every piece of malware a virus. There is no mention anywhere of a ransom either, which you would expect if this was yet another ransomware attack on healthcare or other critical infrastructure. What we do know is that it was labelled a “powerful” attack that disabled all the region’s systems, including the Salute Lazio information portal, which was still unreachable at the time of writing.

Unofficial sources claim that the attackers managed to get hold of the credentials for an administrator’s account and released a “cryptolocker”, which would suggest that this was a ransomware attack, or possibly a “wiper” attack, where attackers use ransomware to scramble a target’s computers with no intention of asking for a ransom or providing a way to unscramble them. The investigation will be done by the Italian Postal and Communications Police Service, the police department responsible for cybercrime.

Attackers

The region’s officials have called the attackers both criminals and terrorists. Which of the two labels is the more accurate depends on the nature of the attack. There have been a lot of protests in Italy against the introduction of the so-called Green Pass, which shows that people have been vaccinated, tested negative, or recovered from COVID-19. The Green Pass, which comes into effect on 6 August, will give holders access to places where non-holders will be barred.

While some see the Green Pass as a way to increase vaccination rates and persuade the undecided, others see it as a step too far. Judging by the number of vaccination requests, the persuasion seems to be working, which might be what triggered this attack on the Lazio region’s systems. But it might just turn out to be the next ransomware or wiper attack (a scenario that, by now, would hardly be surprising).

Recovery

Even though most IT systems were taken offline, some have been restored, including emergency networks, time-dependent networks, and hospital systems. The local government has reiterated that the vaccination drives will continue in spite of the attack. The vaccination appointment system for the Lazio region has been transferred to the Italian national vaccination system to keep the momentum going.

Critical infrastructure

The disruption of Lazio’s vaccine appointment system is just one of a number of notable and disturbing attacks against critical infrastructure in 2021. To learn more about the threat cybercriminals pose to critical infrastructure, Lock and Code podcast host David Ruiz spoke to Lesley Carhart, principal threat hunter with Dragos and a globally-respected expert on the subject.

You can hear their conversation below, or find it on your preferred platform, including Apple Podcasts, Spotify, and Google Podcasts.

The post COVID-19 vaccine appointment system attacked in Italy appeared first on Malwarebytes Labs.

Chrome casts away the padlock—is it good riddance or farewell?

It’s been an interesting journey for security messaging where browsers are concerned. Back in the day, many of the websites you’d visit on a daily basis weren’t secure. By secure, I mean that they didn’t use HTTPS. There was no padlock, which meant that the traffic between you and the website wasn’t encrypted, and so it was vulnerable to being snooped on or changed.

Sites you bought goods from tended to use HTTPS so your credit card number couldn’t be intercepted as it traversed the Internet. But random blogs? Forums? Information portals? Not so much. People would often say that it wasn’t really dangerous if blogs or information sites weren’t using HTTPS. No personal data was being sent or received, no purchases were being made. What’s the big deal?

And yes, that sort of makes sense up to a point. However, as the Mozilla blog highlights, anyone on the network can potentially read or modify a website’s data before it reaches you. This is a bad thing whether or not you’re shopping or simply looking at humorous memes. It means that bad actors can insert ads into the pages you see, add malware to your downloads or redirect you to fake versions of the sites you want to visit.

Why was it so difficult to make sites HTTPS?

Cost factored into this. HTTPS certificates and setup were pricey, many thought HTTPS impacted performance, and the benefits of using it were often unclear.

In theory it was possible to get yourself a free HTTPS service as far back as 8 to 10 years ago, but in practice it’d often only be free for a year or the service might not be very good. On top of this, HTTPS was a pain to set up and non-technical site owners stood little chance of doing it themselves.

As an example: Until relatively recently, if you used a Google Blogspot blog with a custom domain (a domain hosted elsewhere) you couldn’t use HTTPS.

Worse, Google decided to ding search rankings for sites not running HTTPS and also mark them as “not secure”. Imagine having your blog on Google’s Blogspot, and Google both not offering HTTPS and penalising you for not using HTTPS!

In short, there were many obstacles to getting HTTPS up and running. If you managed to dodge the cost bullet back in the day, there was still the complexity bullet waiting in the wings. Considering many folks run different services with different orgs based on price, functionality, and location, firing up HTTPS on your site was not an easy task. For every person dismissing concerns with “that’s easy”, you could find a dozen more who followed all the steps and ended up with something not working regardless.

What changed?

Google’s decision to penalise the search rankings of non-HTTPS sites was just one of a raft of carrots and sticks that emerged in the second half of the last decade which pushed HTTPS adoption from the niche to the mainstream.

The trend started about ten years ago with Firesheep, a browser plugin that shamed the big social media sites for not using HTTPS, before being accelerated enormously by Edward Snowden revealing the vast scale of Internet surveillance.

The response came in many forms including changes to search engines, free HTTPS services, better web hosting, new web protocols, and, as we’ll see later, web browsers.

A fanfare and a big parade for the good stuff

There is a tendency in security to treat good security practices as unusual, point-from-across-the-street, jaw-dropping events, and bad security practices as commonplace and not worth pointing at.

There are a few problems with this. We’ll return to the ubiquitous padlock as an example.

If you go back a few years, when padlocks on sites became the Latest Big Deal(™), it was an instant banner moment for “this is definitely a good thing”. You couldn’t move in some circles for endless promotion of the padlock, our new hero.

And hey, the padlock most definitely is good! However, it doesn’t mean it’s not also used for evil deeds.

A common talking point is “Look for the padlock. If it’s there, it means the site is safe”.

Well, no. It means it’s secure in as much as the data can’t be peeked at by random observers. The data can still be received as intended by an evil site owner. What do you think this means?

It means lots of phishers and scammers started setting up HTTPS websites.

As a result of the “Padlock = safe” messaging at the time, some would-be victims probably didn’t land on a fake bank site and think “Oh my, this looks dubious. Time to leave. Nice try, phishers”. They likely assumed “The design is slightly off, but it’s got a padlock and everyone told me a padlock means that it’s safe. Phew.”

This problem became even worse once free HTTPS became popular and affordable, if not completely free for most folks. All of a sudden, we had lots of security resources and training saying “padlock means it’s the real deal” while lots of fake sites started sporting padlocks.

This is a small example of how hyping up a security feature that becomes available to all can go wrong, without considering that bad people will use it too.

Back to browsers.

A fanfare and big parade for the bad stuff

Chrome is following the trend of rolling back on “this is secure” notifications, which as we’ve seen may not be 100% helpful, in favour of just saying “This one here is not secure. Avoid it”. This deliberately mirrors the change the web has undergone, from HTTPS being unusual to HTTPS being the norm.

Sure, people trained to look for padlocks are suddenly not able to view them. On the other hand, padlock sites are so common now that seeing the padlock isn’t that useful anymore.

From now on, it’s business as usual until Chrome says BAD SITE.

This isn’t a recent plan, by any means. If nothing else, Chrome tends to give lots of advance warning of changes coming up so people can adjust for them as needed. It first mentioned this move way back in 2018, when it said “Users should expect that the web is safe by default”. To be more specific, they decided to change what displayed in your URL bar as follows:

[Padlock icon] | [Secure]

They stated their aim to remove the word “Secure” so you’d only see:

[Padlock icon]

which would eventually lead to their final intended state of:

[Website]

…with no padlock, or the word “Secure” next to it. Do you remember when Chrome used to say “Secure” next to the padlock indicator? I don’t! It hasn’t hurt my browsing in any way. I strongly suspect the padlock icon being removed won’t harm it either. Why continually highlight the norm, if the norm is functioning as expected?

It seems more sensible to hang a big sign saying “This is bad” on the bad stuff and call it a day. It’ll be interesting to see which browsers follow suit, assuming there’s some out there which haven’t already taken this step. If you’re a Chrome user, don’t be alarmed when our little padlock pal takes its last steps into the distance.

The post Chrome casts away the padlock—is it good riddance or farewell? appeared first on Malwarebytes Labs.

NSA issues advice for securing wireless devices

In an information sheet providing guidance on securing wireless devices while in public (pdf), aimed at National Security System, Department of Defense, and Defense Industrial Base teleworkers, the NSA describes malicious techniques used by cyber actors, and ways to protect against them.

And anyone who does not belong to that group of teleworkers can still take advantage of the knowledge it has shared!

While the NSA’s advice and best practices aren’t a guarantee of protection, they will help to reduce the risks you face while you’re out and about. The most obvious advice in the information sheet is not to use public Wi-Fi hotspots when more secure options are available. Use a corporate or personal Wi-Fi hotspot with strong authentication and encryption whenever possible, and use HTTPS and a VPN when it isn’t.

Wi-Fi and encryption

Even if a public Wi-Fi network requires a password, it might not encrypt traffic going over it. And even if the Wi-Fi network does encrypt the data, malicious actors can decrypt the captured data if they know the pre-shared key. In either case, the network traffic (including login credentials) is easily captured using a couple of methods:

Masquerading

Masquerading occurs when the name or location of an object, legitimate or malicious, is manipulated to evade defenses and/or observation. For example, in this context it means that a cyber-criminal might broadcast an SSID (the name of a wireless network) that looks legitimate just to trick you into using their Wi-Fi hotspot.

As we have discussed before, anyone can spoof a well-known SSID, and your device will happily connect to it again if it has connected to an open SSID with the same name before. Once you are connected to this malicious hotspot masquerading as one you’ve used before, its operator can redirect you to malicious websites, inject malware or ads, and spy on your network traffic.

Network sniffing

Network sniffing refers to using the network interface on a system to monitor or capture information sent over a wired or wireless connection. Adversaries can join a network and “sniff” the traffic passing over the wireless network, capturing information about the environment and the traffic of other Wi-Fi users, including authentication material passed over the network.

Please encrypt your traffic

You can’t stop masquerading or network sniffing, but you can make them useless to an attacker by adding a layer of encryption to your traffic with a VPN. The NSA strongly advises using a personal or corporate-provided virtual private network (VPN) to encrypt the traffic.

Other interfaces

The NSA rightly warns that in addition to Wi-Fi, cyber actors may also compromise other common wireless technologies, such as Bluetooth and Near Field Communications (NFC). The risk isn’t merely theoretical since these malicious techniques are publicly known and in use.

NFC

NFC is the technology behind contactless payments and other close device-to-device data transfers. As with any network protocol, there may be NFC vulnerabilities that can be exploited, although due to NFC’s range limitations, the opportunities to do so are limited.

Bluetooth

Bluetooth technology transmits data wirelessly over short distances. Keeping a device’s Bluetooth feature enabled in public can be risky. Malicious actors can scan for active Bluetooth signals, potentially giving them access to information about a targeted device.

The NSA highlights a few specific Bluetooth related attack techniques:

  • Bluejacking, sending unsolicited messages (often unsolicited anatomical pictures sent to women) over Bluetooth to Bluetooth-enabled devices such as mobile phones, PDAs or laptop computers.
  • Bluesnarfing, the unauthorized access of information from a wireless device through a Bluetooth connection, often between phones, desktops, laptops, and PDAs.
  • Bluebugging, manipulating a target phone into compromising its security to create a backdoor attack, before returning control of the phone to its owner.
  • Blueborne, a Bluetooth vulnerability that can give malicious actors complete control over a user’s Bluetooth device.

Do’s and don’ts

The information sheet goes on to provide some do’s and don’ts. Most of them are very generic and you will probably have read them many times before. We are sure we have listed them ourselves time and again.

That doesn’t mean they’re bad advice though, and it suggests that some people aren’t paying close enough attention, so here goes:

Mobile devices

  • Keep software and applications updated with the latest patches.
  • Use anti-virus/anti-malware software.
  • Use multi-factor authentication (MFA) whenever possible.
  • Reboot regularly, especially for mobile phones after using untrusted Wi-Fi. (Rebooting a device will remove non-persistent threats from memory.)
  • Do not leave devices unattended in public settings.
  • Do not use personal information—like your name—in the names of the devices.

Wi-Fi

  • Disable Wi-Fi when you aren’t using it.
  • Disable Wi-Fi network auto-connect.
  • Ensure your device is connecting to the correct network.
  • Log out of the public Wi-Fi network and “Forget” the access point when you’re finished.
  • Use HTTPS where you can.
  • Only browse to, or use, necessary websites and accounts.
  • Do not connect to open Wi-Fi hotspots.
  • Do not enter sensitive data or conversations.
  • Do not click unexpected links, attachments, or pop-ups.
  • Do not set public Wi-Fi networks to be trusted networks.
  • Do not browse the Internet using the administrator’s account of the device.

Bluetooth

  • Disable the Bluetooth feature when it is not being used.
  • Ensure the device is not left in discovery mode when Bluetooth is activated and discovery is not needed.
  • Monitor Bluetooth connections by periodically checking what devices are currently connected to the device.
  • Do not use Bluetooth to communicate passwords or sensitive data.
  • Do not accept non-initiated pairing attempts.
  • Use an allow-list or deny-list of applications that can use the device’s Bluetooth.

NFC

  • Disable NFC feature when not needed (if possible).
  • Do not bring devices near other unknown electronic devices. (This can trigger automatic communication.)
  • Do not use NFC to communicate passwords or sensitive data.

More advanced advice for laptops

While it may seem trivial for the NSA to provide guidance in this field, since most security professionals have given up hope that we’ll ever learn, it just may be that when it comes from a source like the NSA people might actually start paying attention. So, while most of the advice will look familiar, hearing it for the umpteenth time might actually persuade someone to follow it.

Stay safe, everyone!

The post NSA issues advice for securing wireless devices appeared first on Malwarebytes Labs.

Zoom and gloom? Video comms org agrees to settle for $85m

Zoom has agreed to an $85m settlement regarding privacy, zoom-bombing, and data sharing. The plaintiffs in the class action privacy lawsuit filed in the US against the embattled company weren’t particularly impressed with the following:

  • Zoom-bombing running wild in video sessions. Zoom-bombing, the practice of joining sessions without permission and causing mayhem, exploded into life during 2020.
  • Claiming to offer end-to-end encryption, when they were using something called transport encryption in places. They later had to clarify that they meant data was encrypted at Zoom endpoints. In theory, the company could access the data but said they don’t directly access it.
  • Sharing data with social media companies even if you don’t have an account with them. Zoom used Facebook’s Software Development Kit for app features, which resulted in data being sent to Facebook. The part about data being sent even without an account wasn’t made clear, according to Motherboard. As a result of the linked investigation, Zoom decided to remove the Facebook SDK. They also apologised for the oversight, and shut down “unnecessary device data” collection.

Interestingly, one part of the settlement is a request for Facebook to delete US user data obtained via the SDK.

The numbers game

How badly have Zoom done off the back of this settlement? Well, it’s complicated. It essentially boils down to around $15 for people without subscriptions, or $25 for folks with pricier accounts. It’s worth noting these amounts are specifically for US-based Zoom users, with a few exceptions. If you’re using Zoom outside of the US, you almost certainly won’t be getting fractionally rich from this one. Sorry!

As for Zoom, your mileage will definitely vary as to whether or not you think these costs are sufficient. According to reports, they made around $1.3 billion in subscriptions from paying US customers. The plaintiff’s legal team says the $85m is “reasonable” considering other costs tied to legal action. They’re also seeking $21.3m in legal fees from Zoom.

A fitting punishment?

Is it reasonable, though? Or should the total be higher? According to The Register, the $85m amount is “around 6% of the total revenues collected based on allegedly unlawful activities”. In many ways, Zoom wandered into a situation it couldn’t hope to contain. Nobody could’ve predicted the pandemic, or the massive shift to working from home. Much less which remote communication tools would rise or fall as a result. It just so happened the fates aligned and picked Zoom. It’s arguable no company could have weathered such a dramatic spike in users and rapid-fire improvements. It’s also arguable many issues could’ve been avoided if Zoom had shown a little more foresight, instead of seemingly playing catchup a lot of the time.

Trolls have been crashing private forums, chat sessions, web-chats and anything else they can get their hands on for years. Was it really a surprise the same would happen to Zoom sessions? Was a tipping point required before passwords and waiting rooms were enabled by default for all meetings? Biometrics, tracking, and monitoring people working at home is increasingly frowned upon. Did anyone really think features like Attention Tracking would be popular?

A hard lesson learned, but some may feel the lesson should have been much harder.

How do users get their money?

It’s still being worked out, and you’ll almost certainly need to see who qualifies and who doesn’t. The current plan is to apply for awards through a specific website. It’s likely there’ll be imitation pages and phishing mails aplenty once it goes live. It remains to be seen how many people will actually apply, and some aspects of the case aren’t fully hammered out yet so we’ll likely revisit this one come October.

The post Zoom and gloom? Video comms org agrees to settle for $85m appeared first on Malwarebytes Labs.

RDP brute force attacks explained

While you read these words, the chances are that somebody, somewhere, is trying to break into your computer by guessing your password. If your computer is connected to the Internet it can be found, quickly, and if it can be found, somebody will try to break in.

And it isn’t like the movies. The criminal hacker trying to guess your password isn’t sat in a darkened room wondering which of your pets’ names to type on their keyboard. The hacker’s at lunch and they’ve left a computer program churning away, relentlessly trying every password it can think of. And computers can think of a lot of passwords.

Oh, and there are lots of hackers out there and they don’t take turns trying to break into your computer one at a time. They’re all trying to break in separately, all at the same time.

While there are lots of ways to break into a computer that’s connected to the Internet, one of the most popular targets is the Remote Desktop Protocol (RDP), a feature of Microsoft Windows that allows somebody to use it remotely. It’s a front door to your computer that can be opened from the Internet by anyone with the right password.

rdp login
The login screen of a Windows computer in Rome, Italy, that the author found on the Internet. Its owner may not be aware that it’s running RDP but hackers will be.

RDP explained

RDP is an immensely useful feature: Remote workers can use it to log in to computers physically located at their office buildings, and IT experts can use it to fix somebody’s computer from anywhere in the world. However, that ability to log in to a computer from anywhere in the world also makes RDP an immensely attractive target for criminal hackers looking to steal data or spread malware.

To log in to a computer using RDP, users simply type in the Internet address of a computer running RDP and enter their username and password. Then they can use it as if it was on the desk in front of them. It’s that simple.

Reading that you might think that you’re safe as long as nobody knows your computer’s address. Unfortunately, your computer’s address isn’t a secret, even if you’ve never told anyone what it is.

Internet addresses (IP addresses) are just sequences of numbers, so it’s easy to create computer programs that guess all the possible IP addresses in existence and then quickly visit them to see if they belong to Windows computers with RDP switched on, over and over again. And if that’s too much work, there are websites like Shodan that can do it for them.
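In essence, the scanning step needs nothing more exotic than a TCP connection attempt against port 3389, the default RDP port. A minimal sketch in Python (the function name is ours, purely for illustration) might look like this:

```python
# A minimal sketch of how a scanner tests one address: a plain TCP
# connection attempt to port 3389, the default RDP port.
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the given port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

Loop that over a range of addresses and you have the crude heart of an Internet-wide scanner, which is exactly why “nobody knows my address” is no defence.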

shodan search for computers running rdp with screenshots
The search results page of the Shodan search engine showing Internet-connected computers running RDP.

Searches like this make it easy for hackers to find the computers they want to target, but they don’t help them break in. To do that they will need to guess a password successfully.

Brute force guessing explained

Hackers figure out RDP passwords using a technique called “brute force guessing,” which is as basic as it sounds. They simply use a computer program that will try a password and see if it works. If it doesn’t, it will try another, and another, and another, until it guesses a password correctly or decides it’s time to try its list of passwords on a different computer. The guesses aren’t random. Some passwords are far more popular than others, so criminals use lists of the most commonly used passwords, starting with the most popular.
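The loop itself is almost embarrassingly simple. Here’s a toy sketch in Python; `check_password` is a hypothetical stand-in for a real login attempt against a remote machine:

```python
# A toy sketch of dictionary-based brute force guessing. Real attacks
# point this loop at a remote RDP service rather than a local function.
COMMON_PASSWORDS = ["123456", "password", "admin", "qwerty", "letmein"]

def check_password(guess: str) -> bool:
    # Stand-in for "try to log in with this guess". The account's weak
    # password is hard-coded here purely for the sake of the example.
    return guess == "qwerty"

def brute_force(wordlist):
    """Try each candidate in order of popularity; stop on the first hit."""
    for attempts, guess in enumerate(wordlist, start=1):
        if check_password(guess):
            return guess, attempts
    return None, len(wordlist)
```

Because the wordlist is ordered by popularity, a weak password falls in a handful of tries: `brute_force(COMMON_PASSWORDS)` finds the hard-coded example above on the fourth guess.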

Unfortunately, weak passwords are extremely common. In fact, they’re so common that there is an entire criminal industry dedicated to guessing RDP passwords. Often, the hackers that guess the passwords don’t actually use them. Instead, they sell them on the Dark Web to other criminals, at an average price of just $3 per account.

Imagine how many $3 passwords a hacker has to sell to make it pay its way and you’ll get a sense for how big this problem is.

There are numerous groups scanning the Internet and trying to guess RDP passwords. Some hang around, making tens of thousands of guesses at the same computer, while others will try just a few guesses before moving on to another target. At any one time your computer might have the attention of multiple groups, all employing slightly different tactics to guess your password. And they never get bored. Even if they appear to give up, they’ll return later, to see if anything has changed or to try something new.

Before the COVID-19 pandemic, RDP was already the go-to method for spreading ransomware, and it had been for several years. Because of that, I co-authored some research into RDP brute force attacks in 2019. Our research was simple: We connected some Windows computers to the Internet, turned on RDP, and waited to see what happened.

We didn’t have to wait long. Hackers started trying to guess our test computers’ passwords within 90 seconds of them being attached to the Internet. Over the course of a month our test computers were probed with password attempts all day, every day, 600 times an hour.

And that was before things got really serious, in 2020.

The pandemic triggered a huge surge in the number of people working from home. In turn, that triggered a surge in the number of people relying on RDP. Because guessing RDP credentials and selling them was already a viable underground business, the criminal infrastructure to take advantage of remote workers was already in place. The result was a colossal increase in the number of attacks on computers running RDP.

Stopping RDP brute force attacks

RDP brute force attacks represent a serious, on-going danger to Internet-connected Windows computers. However, there are a number of ways to protect yourself against them. As in all areas of computer security, defence in depth is the best approach, so aim to do as many things on this list as you reasonably can.

  • Turn it off. The simplest way to protect yourself from RDP brute force attacks is to just turn off RDP permanently, if you don’t need it.
  • Use a strong password. Brute force attacks exploit weak passwords so in theory a strong password is enough to keep attackers out. In practice, users often overestimate how strong their passwords are, and even technically strong passwords can be rendered useless if they are stolen or leaked. For those and other reasons it’s best to use at least one of the other methods in this list too.
  • Use a VPN. RDP can be protected from brute force attacks by forcing users connect to it over a Virtual Private Network (VPN). This hides RDP from the Internet but exposes the VPN, leaving it vulnerable to attack, so it also needs to be properly secured. This is the approach taken by a lot of organizations, but is likely to be beyond the patience or technical ability of many home users.
  • Use multi-factor authentication (MFA). MFA forces users to provide multiple forms of authentication in order to log in, such as a password and a one-time code from an app. MFA offers very strong protection because even if an attacker guesses a password, it isn’t enough to give them access. Like VPNs, MFA solutions can be complex and are often aimed at business users.
  • Limit the number of guesses. The simplest way to lock out brute force attackers is to limit the number of password guesses they can make. If a legitimate user gets their password wrong, they normally only need a few extra guesses to get it right. There is no need to give somebody the luxury of making tens- or hundreds-of-thousands of guesses if you only need a handful. Locking out users who make too many wrong guesses, or limiting the number of guesses users can make has the effect of making weak passwords much, much stronger. (It’s how bank cards and smartphones get away with using simple four- or six-digit PINs to protect themselves.)
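That last idea can be sketched as a simple counter. `LoginGuard` and its parameters are illustrative only, not a real Windows setting:

```python
# A minimal sketch of guess-limiting: after a few consecutive failures
# the account locks, so even a four-digit PIN with 10,000 possibilities
# only ever exposes a handful of guesses to an attacker.
class LoginGuard:
    def __init__(self, max_attempts: int = 5):
        self.max_attempts = max_attempts
        self.failures = 0
        self.locked = False

    def attempt(self, guess: str, real_password: str) -> bool:
        """Return True only for a correct guess on an unlocked account."""
        if self.locked:
            return False
        if guess == real_password:
            self.failures = 0  # a successful login resets the counter
            return True
        self.failures += 1
        if self.failures >= self.max_attempts:
            self.locked = True  # real systems lock for a period, or alert an admin
        return False
```

Once locked, even the correct password is refused until the account is unlocked, which is what turns a tireless guessing program from a certainty into a long shot.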

The post RDP brute force attacks explained appeared first on Malwarebytes Labs.

The 3 biggest threats reaching for your antivirus software’s off switch

Having antivirus (AV) software on your computer is a staple. Modern antivirus offers layered protection—a cybersecurity approach that uses multiple techniques in one package to keep you safe if you download a malicious file from the Internet, find yourself worrying after clicking a link on a direct message from a non-contact on social media, or automatically open an email attachment before you can stop yourself.

An excellent AV saves you from unnecessary worry because it works. It stops bad things. And that’s why so many people want to turn it off.

Some of the reasons are obvious; some, not so much. We find out what these reasons are by listing the three likely culprits behind your AV mysteriously being off the next time you use your computer.

1. Hackers

Ransomware has been in the news this year, but it’s been a serious threat for several years now. What many users may not realize is that ransomware attacks from a few years ago were quite different from the “common-or-garden” ransomware we see now.

A few years ago, ransomware was typically sent out in mass email campaigns. The criminals behind it hoped to catch out as many unsuspecting users as possible and charged each victim a ransom of a few hundred dollars to remove the ransomware from their computer. It was hugely inconvenient but it was a problem that tended to affect individual users rather than entire organizations.

Ransomware today isn’t a nuisance, it’s a criminal business. These days it is delivered by hand, and it’s targeted at entire companies instead of individual computers. Cybercriminal gangs break in to an organization’s network and may stay there for months before finally wreaking havoc. Before the wreaking, the group performing the attack want to maximise the chances of their attack succeeding. They do that by turning themselves into users with the power to turn off the victim company’s antivirus software, if they can.

2. Malware

Malware (malicious software) is a possible second culprit as to why your AV is turned off for some reason.

No surprise here.

Malware and antivirus are natural enemies. Normally, the only way they can co-exist on your computer is if the former is in quarantine. Malware authors know this, which is why some of them have successfully kitted out their malicious software to try to disable, if not completely uninstall, antivirus on any computer it infects. With AV out of the way, the malware is free to harm any system it’s on, as it was programmed to.

We see this capability most often in Trojan malware, malicious software that pretends to be something important—like an update to a program you use—but does insidious things on your computer when run. LemonDuck, an advanced cryptominer, is an example of a Trojan programmed to try to uninstall antivirus.

Although the hackers who run ransomware will often try to disable antivirus manually (as we said in the first section), some ransomware also has the ability to disable antivirus programmed in to it, including MegaCortex, PYSA, Ragnar Locker, and REvil.

3. Insiders (friends and family)

We often write about insider threats on this site—individuals who, often unknowingly, put their employer at risk. Believe it or not, insider threats exist at home too (for lack of a better term, we will stick to calling them “insiders” here).

Who might these be, you ask? They could be your kids, other family members who live with you, or—perhaps in certain circumstances—an insider could be you.

Which begs the question: Why would any of them turn off your antivirus? Often because they are erroneously advised to.

“Back in the earlier days of gaming, it was common to see antivirus programs quarantine game files,” says Chris Boyd, Lead Malware Intelligence Analyst for Malwarebytes.

“PCs with limited system resources would find the strain of games running alongside security programs a bit too much to handle. As a result, ‘Turn off your antivirus’ became a common sight on gaming forums and in accepted wisdom generally.” And it wasn’t just gaming. “Turn off your AV” was often the first thing that you’d be asked to do if you phoned tech support or read the troubleshooting section of your manual, for any piece of software.

“These days, you should never have to turn off your security solutions in order to have a quick round of Fortnite or a long session of Elder Scrolls Online”, says Boyd.

Still, the bad advice persists.

steam call of the wild
A Steam player seeking help after turning off their AV, only to find that nothing works anyway.

Another reason people willingly disable AV is to stop antivirus alerts when they’re installing software on shared computers.

You might be smart enough to know that an antivirus warning is a bad sign, but if you’re not the only user of the computer, and the other users really want to install something, there’s a chance they’re simply going to see security pop-ups as a hindrance and shoot the messenger by turning off the AV.

So, while a mysteriously disabled antivirus can be the handiwork of hackers and malware with bad intentions, the culprit is sometimes a lot closer to home, and the motive less nefarious.

However it happens, the result is the same: You are left unprotected. Your antivirus software isn’t going to do any good if it’s turned off, so keep it safe to keep yourself safe.

Stay safe!

The post The 3 biggest threats reaching for your antivirus software’s off switch appeared first on Malwarebytes Labs.