IT NEWS

Thermal cameras could help reveal your password

Thermal imaging cameras detect heat energy, which makes them a helpful tool for engineers hunting for thermal insulation gaps in buildings. But did you know that such devices can now aid in password theft?

Because these devices are now much cheaper than they used to be, pretty much anyone can get their hands on one. And anyone with a thermal imaging camera is a potential password thief.

Researchers from the University of Glasgow’s School of Computing Sciences have developed a system called ThermoSecure to demonstrate how thermal imaging cameras can be used for “thermal attacks.”

In their paper, ThermoSecure: Investigating the effectiveness of AI-driven thermal attacks on commonly used computer keyboards, Dr. Mohamed Khamis, who led the development of ThermoSecure, Dr. John Williamson, and Norah Alotaibi, the authoring team, said: “Thermal cameras, unlike regular cameras, can reveal information without requiring the attacker to interact with the targeted victim, be present during the authentication attempt, or plant any tool that can be linked to the attacker which could potentially exposing [sic] them. Such information includes heat residues left by the user during authentication, which can be retrieved using thermal cameras.”

“Having acquired a thermal image of a keyboard or touchscreen after authentication, the attacker can then analyze the heat map and exploit it to uncover the entire password or pattern.”

Bright areas in a thermal image are heat imprints, indicating areas that were recently touched. While these imprints are enough for the AI to determine someone’s password, two factors affect its accuracy: (1) the password length and (2) the heat trace age, or the time elapsed after authentication.

ThermoSecure perfectly guessed all 6-character passwords in the test, and successfully revealed 12-character passwords with 82% accuracy and 16-character passwords with 67% accuracy. 

As for heat trace age, on average, ThermoSecure successfully revealed passwords with 86%, 76%, and 62% accuracy when the image was taken 20 seconds, 30 seconds, and 60 seconds after authentication, respectively. The longer the heat trace age, the less accurate the AI was in guessing passwords.

“It’s important that computer security research keeps pace with these developments to find new ways to mitigate risk, and we will continue to develop our technology to try to stay one step ahead of attackers,” said Dr. Khamis in an interview with ZDNet.

He also advised how you can protect yourself from thermal attacks: Use strong passwords and, if possible, use biometric verification for added protection.

“Users can help make their devices and keyboards more secure by adopting alternative authentication methods, like fingerprint or facial recognition, which mitigate many of the risks of thermal attack.”

Fake tractor fraudsters plague online transactions

The agriculture sector has been under fire from digital attacks for some time now. The primary problem so far has been ransomware, and law enforcement recently warned that malware authors may be gearing up to time their attacks in this sector for maximum damage. The FBI highlighted that attacks occurred throughout both 2021 and 2022, including outbreaks of ransomware at multi-state grain companies. Conti, Suncrypt, BlackByte, and more also put in appearances at several grain cooperatives.

And now there’s another issue for the agriculture sector: Sophisticated scams involving fake tractors and sale portals have cost certain businesses $1.2 million in the space of a month. Worryingly, the Australian Competition and Consumer Commission says this is a 20% increase on the same period a year earlier.

From fake ad to fake tractor

As with so many internet scams, it begins with fake online adverts. These take the form of both fake websites and bogus ads placed on genuine advertising platforms. This Age article highlights some of the techniques used to reinforce the legitimacy of the ads, which include:

  • Mock sale contracts. Fake documentation and identification are a staple of 419 and social engineering scams, so it makes sense they would put in an appearance here.
  • Listing ABNs on bogus websites. This is a way of making things look legitimate. An ABN (Australian Business Number) entry is how you confirm a business is genuine, or at least exists. A valid record will display as active, next to the business name, type, and location. You can also click through and see additional data on trading names, active status, goods and services, and more. Scammers are likely including genuine business names in their ads without the actual owners knowing about it, which is going to cause reputational damage down the line.
  • Free trials after deposits are made. Making an offer sound better than it really is works where most scams are concerned. As the article notes, excuses will be made as to why in-person inspections can’t be arranged and any upfront payment should be treated with suspicion.

Don’t trade in your cash for a non-existent model

While these attacks are being flagged in Australia, the reality is that this kind of thing can happen anywhere. If you’re involved in agriculture, here are some ways to prevent it from happening to you:

  • Inspect your purchase via video call or in person. If this isn’t possible, ask why.

  • Don’t pay anything upfront, especially if the seller claims it’s being done through an “escrow” service of some kind. Most likely it’s just something operated by the scammer. It’s worth noting that scammers typically ask for 10-20% deposits, which can be a lot of money where tractors are involved.

  • If the machinery you’re buying is below the market price in a way which makes you think it’s too good to be true, then it probably is.

  • Check with businesses supposedly close to the seller’s location and see if any of them know about the individual or business wanting to sell you something.

  • Countries often have a list or business register similar to Australia’s ABN. The UK has Companies House, where you can look up registered companies. There are several routes to go down if you’re in the US. None of this guarantees the legitimacy of the entity you’re dealing with. It’s possible they may be misusing the name of a genuine business, so use publicly available information to contact that business directly and see if everything is on the level.

Stay safe out there!

Criminal group busted after stealing hundreds of keyless cars

Europol has disclosed an international operation in which 31 suspects were arrested, 22 locations were searched, and over one million Euros in criminal assets were seized. The organized criminal gang specialized in stealing French keyless cars.

Among those arrested were the software developers who created so-called automotive diagnostic solutions that allowed criminals to replace the original software of the vehicles, so the doors could be opened and the ignition started without the actual key fob. Also arrested were the software’s resellers and the car thieves who used the tool to steal vehicles.

The arrests were made by French, Latvian, and Spanish law enforcement agencies with the assistance of Europol. Europol said it has supported the investigation since March 2022 by providing extensive analysis and disseminating intelligence packages to each of the affected countries.

Suspects

The fraudulent software duplicated the vehicles’ ignition keys to aid in the theft of the cars. Marketed as an automotive diagnostic solution, the tool was able to replace the original software of the targeted vehicles without respecting the protocol and without the original key.

Details about the method the car thieves used are sparse (for understandable reasons), but what we could gather is that the developers ran a website—on a domain that has been seized—where they sold a package that included a tablet, connectors, and software. The software was constantly adapted and updated to counteract the measures implemented by companies to reinforce the security of their vehicles.

Stealing keyless cars

Europol said the gang focused on cars from two unnamed French car manufacturers, which probably means the developers found a vulnerability in the cars’ firmware that allowed them to replace the original software.

Vulnerabilities in keyless entry systems have also been found in the firmware of other car manufacturers. To thwart the interception and replay of authentication codes, many modern cars rely on a rolling code mechanism, which prevents replay attacks by providing a new code for each authentication of the remote keyless entry. But this method is not available for all brands and models, and some brands were found to be using predictable codes.
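
To make the rolling code idea concrete, here is a minimal sketch in TypeScript (Node.js). It is illustrative only: real systems such as KeeLoq use dedicated ciphers rather than HMAC, and the shared secret, counter handling, window size, and code length below are assumptions made up for the example.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical values, for illustration only.
const SHARED_SECRET = "fob-secret-provisioned-at-manufacture";
const LOOKAHEAD_WINDOW = 16; // how far ahead of the last seen counter the car will accept

// Fob side: derive a fresh one-time code from the shared secret and a counter
// that increases with every button press.
function codeFor(counter: number): Buffer {
  return createHmac("sha256", SHARED_SECRET)
    .update(String(counter))
    .digest()
    .subarray(0, 4);
}

// Car side: accept a code only if it corresponds to a counter slightly ahead of
// the last one seen; old (replayed) or forged codes are rejected.
function receiverAccepts(code: Buffer, lastCounter: number): number | null {
  for (let c = lastCounter + 1; c <= lastCounter + LOOKAHEAD_WINDOW; c++) {
    if (timingSafeEqual(code, codeFor(c))) {
      return c; // resynchronise to the new counter value
    }
  }
  return null;
}

console.log(receiverAccepts(codeFor(42), 41)); // fresh code -> 42 (accepted)
console.log(receiverAccepts(codeFor(42), 42)); // replayed code -> null (rejected)
```

Because the receiver only accepts codes ahead of the counter it last saw, a captured code becomes useless once it has been used, which is what defeats simple interception and replay. Predictable counters or weak code derivation undermine exactly this property.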

The Europol and Eurojust statements both say that the tools provided by the developers enabled criminals to replace the original software of the targeted vehicles. This indicates a very different methodology from intercepting and replaying authentication codes.

Mitigation

Now that law enforcement has found and disabled the source of the software, it shouldn’t take too long to find out which method was used, and the car manufacturers should be able to make the necessary adjustments.

Updating your car’s firmware is usually not an easy job, or one we recommend doing yourself. We would recommend checking with your local dealer whether an update is available and needed. It usually requires a special device to be hooked up to a port hidden under your dashboard. Your dealer will have such a device and will know where to find the port.

Warning: “FaceStealer” iOS and Android apps steal your Facebook login

Earlier this month, security researchers from Meta found 400 malicious Android and iOS apps designed to steal user Facebook login credentials.

Such mobile malware, which Malwarebytes typically detects as Android/Trojan.Spy.Facestealer, usually arrives as an app disguised as a useful or entertaining tool. But before the app can be fully used, it asks users to log in with their Facebook accounts, at which point their usernames and passwords are sent to the fraudsters.

Stolen credentials can be used to compromise Facebook accounts. From there, the criminals can harvest more data about the original account owner, message friends or family members and scam them, or use these accounts to promote the FaceStealer app (among other things).

Meta provided a short description of the kinds of FaceStealer apps found on both the Google Play Store and the Apple App Store:

  • Photo editors, including those that claim to allow you to “turn yourself into a cartoon”
  • VPNs claiming to boost browsing speed or grant access to blocked content or websites
  • Phone utilities such as flashlight apps that claim to brighten your phone’s flashlight
  • Mobile games falsely promising high-quality 3D graphics
  • Health and lifestyle apps such as horoscopes and fitness trackers
  • Business and ad management apps claiming to provide hidden or unauthorized features not found in official apps by tech platforms.

If the apps appear to have positive reviews, that’s because the developers are thought to be creating five-star reviews to bury the negative ones. This is a known social engineering tactic to further entice users to try an app.

FaceStealer has been around for a while. The apps disappear after making headlines, and then FaceStealer pops up again as a different app. And while some apps are reported or actively detected, many evade detection and end up on legitimate app stores.

“The industry, in general, has not been great at detecting these, and everyone is playing catch-up,” said Nathan Collier, Malwarebytes Senior Malware Intelligence Analyst for Android.

Meta said it is alerting Facebook users who may have inadvertently “self-compromised” their accounts by entering their Facebook credentials into the malicious apps.

If you think you’ve entered your Facebook credentials into a dodgy app, change your password immediately. Don’t reuse passwords from your other accounts, and make sure you enable two-factor authentication (2FA) on your Facebook account. You can also have Facebook alert you to attempted log-ins to your account.

Finally, report all suspicious apps using Meta’s Data Abuse Bounty program.

How to spot a scam

Unfortunately, scams are a fact of life online. The virtual ties that bind us are international now: Our public telephone numbers, social media accounts, email addresses, messaging apps, dating profiles, and even our physical mailboxes can all be reached by criminals and con artists anywhere in the world.

And test us they do, with everything from the preposterous offers of “Nigerian princes” to the slow boiling intimacy of long-term, long-distance romances.

There is a lot of good advice around (and plenty of it on this website) to help you understand which scams are popular right now, how they work, and how to spot them.

Though undoubtedly useful, the advice is often specific to a single campaign or type of scam: Watch out for fake DHL emails; Beware of SMS messages from the Royal Mail; Don’t open invoices from unknown senders; Check the spelling and links in emails; Reverse image search too-good-to-be-true dating profile pics, and so on.

Being specific, the advice is narrow. SMS scams are not the same as email scams, and neither has much in common with a romance scam. There is a lot to remember.

So today I’m going to offer you something different. I want to give you the most general advice I can—a template that can be applied to almost any scam, over any media, on any time scale, whether it’s a new scam or something tried and tested.

It doesn’t make the other advice redundant, it’s just another way to look at things.

The advice comes from perhaps the most famous conman in the world, Frank Abagnale, whose alleged exploits were made famous by Leonardo DiCaprio in the movie “Catch Me If You Can”. Abagnale’s account of his own backstory is either true, partially true, or a total fabrication, depending on who you ask. What isn’t in doubt is that he knows a thing or two about lying to get what he wants.

In 2019 he gave an interview to CNBC in which he offered perhaps the best generalised advice about scams I’ve ever heard, and which I will repeat here.

In every scam, no matter how sophisticated or how amateurish, there are two red flags.

These are Abagnale’s red flags:

An urgent need for money

The end goal of all scams is to enrich the scammer. And that often involves a direct transfer of money, whether it’s entering credit card details into a fake website or wiring tens of thousands of dollars to a stranded lover.

The demand for money is almost always urgent. Scammers know that their requests don’t stack up, so they want you to rush, and they don’t want you to involve other people.

In a romance scam where the criminal hopes to make the victim fall in love with them, the scammer may take their time to begin. However, when the demand for money comes, it is likely to be urgent.

On a recent Lock and Code podcast, Cindy Liebes, Chief Cybersecurity Evangelist for the Cybercrime Support Network, spelled out just how patient these scammers can be:

“It can take months, it can take years, but invariably they will seek to get money.”

In other situations, such as business email compromise (BEC) scams, the urgency is immediate.

In a BEC scam an attacker spoofs the email account of a senior employee, such as a CEO, and tries to get a more junior employee to send them some of the company’s money.

Requests often come with a deadline and a demand for secrecy. The “CEO” concocts a story with one or more emails, messages or phone calls about needing help with an urgent, confidential deal. The scammer wants to isolate the employee from the company’s checks and balances, and their own common sense.

Underpinning it all is Abagnale’s first red flag: An urgent need for money.

Sometimes victims aren’t told to act urgently, they just want to. A few months ago we covered an Instagram scam in which victims thought they’d stumbled upon a website where they could see naked pictures of an attractive friend.

Instagram scam

The urgency here came from the viewer’s desire to act on a sexual impulse, and was reinforced by language like “LIMITED SLOTS ONLY, DON’T MISS OUT” and “What are you waiting for?”

The small print even explained the scam in plain terms—victims were being signed up for a premium rate subscription service—but the scammers were betting that victims would be in too much of a hurry to read it.

Asking for personal information

Abagnale’s second red flag is being asked for personal information. Personal information helps the scammer pretend to be you.

Sometimes it’s as simple as stealing your username and password with a fake website, so they can log in as you on the real website.

But it can also be very subtle. In his book The Art of Deception, infamous social engineer Kevin Mitnick describes how he would sometimes make several phone calls to build up the information he needed for a scam.

Each call would capture small details that improved his credibility for the next one. For example, one of Mitnick’s most famous crimes is stealing the source code for a popular Motorola phone in the early 1990s, an attack he described to Vice in 2019.

The attack began with a call to the main Motorola reception, which sent him back-and-forth on several more calls in which he learned the phone number of the VP of Motorola mobility, and that the company had a research centre in Arlington Heights.

This information allowed him to call the VP and credibly introduce himself as “Rick, over in Arlington Heights”, which was enough to convince them to give him the name and phone number of the phone’s project manager.

Mitnick then called the project manager and learned from her voicemail that she was on holiday, and who to contact while she was away. He called the project manager’s stand-in and convinced her that the project manager had not fulfilled a promise to send him the source code before she left on holiday.

In most of the calls he did not ask for enough sensitive information to alert the people he was talking to, but every one of them contained a request for something personal or privileged. Of course, when he finally asked for the source code he was making a request for hugely privileged information, but by then he had built a plausible enough persona to pull it off.

In fact, the last victim was so convinced that “Rick” was genuine that she persuaded a security manager to hand over a username and password for the company’s proxy server on his behalf.

Thankfully, most of us aren’t faced with a hacker as skilled as Mitnick, and few of us would be able to stop him if we were. Most cons are simpler, more direct versions of the same basic idea.

And that brings me to my final point.

Many scammers are professional criminals and scams are common because they work. It makes sense to prepare yourself as thoroughly as you can to spot them, but we all fall short sometimes. There is no shame in falling for a scam, and it isn’t your fault if you do.

A week in security (October 10 – 16)

Last week on Malwarebytes Labs:

Stay safe!

Android and iOS leak some data outside VPNs

Virtual Private Networks (VPNs) on Android and iOS are in the news. It’s been discovered that in certain circumstances some of your traffic is leaked, ending up outside the safety cordon created by the VPN.

Mullvad, the discoverer of this Android “feature”, says it has the potential to de-anonymise someone (though only in rare cases, as it requires a fair amount of skill on the part of the snooper). At least one Google engineer claims that this isn’t a major concern, is intended functionality, and will continue as is for the time being.

MUL22-03

The Android discovery, currently named MUL22-03, is not the VPN’s fault. The transmission of data outside of the VPN is something which happens quite deliberately, to all brands of VPN, and not as the result of some sort of terrible hack or exploit. Although the full audit report has not yet been released, the information available so far may be worrying for some. According to the report, Android sends “connectivity checks” (traffic that determines if a connection has been made successfully) outside of whichever VPN tunnel you happen to have in place.

Perhaps confusingly, this also occurs whether or not you have “Block connections without VPN” or even “Always on VPN” switched on, settings which are supposed to do what you’d expect given their names. It’s quite reasonable to assume a setting which says one thing will not in fact do the opposite of that thing, so what is going on here?

The leakage arises in certain special edge cases, where Android overrides the various “Do not do this without a VPN” settings. This happens, for example, with a captive portal. A captive portal is something you typically encounter when joining a network—something like a hotspot sign-in page stored on a gateway.

Why? Because VPNs run on top of whatever Internet-connected network you are on, so you have to join a network before you can establish your VPN connection. Anything that happens before you establish your VPN connection can’t be protected by it.

As per Bleeping Computer, this leakage can include DNS lookups, HTTPS traffic, IP addresses, and (perhaps) NTP traffic (Network Time Protocol, a protocol for synchronising net-connected clocks).

Mullvad VPN first reported this as a documentation issue, and then asked for a way to “…disable connectivity checks while ‘Block connections without VPN’ (from now on lockdown) is enabled for a VPN app.”

Google’s response, via its issue tracker, was: “We do not think such an option would be understandable by most users, so we don’t think there is a strong case for offering this.”

According to Google, disabling connectivity checks is a non-starter for four reasons: VPNs might actually be relying on them; “split channel” traffic that doesn’t ever use the VPN might be relying on them; it isn’t just connectivity checks that bypass the VPN anyway; and the data revealed by the connectivity checks is available elsewhere.

The rest is a back and forth debate on the pros and cons of this stance, which is still ongoing. At this point, Google is not budging.

iOS has entered the chat

It seems this isn’t confined to Android. Similar things are happening on iOS 16, with multiple Apple services claimed to be leaking outside of the VPN tunnel, including Maps, Health, and Wallet.

According to Mysk, the traffic being sent to Apple isn’t insecure, it’s just going against what users expect.

“All of the traffic that appeared in the video is either encrypted or double encrypted. The issue here is about wrong assumptions. The user assumes that when the VPN is on, ALL traffic is tunneled through the VPN. But iOS doesn’t tunnel everything. Android doesn’t either.”

They suggest that one way forward to stop this from happening would be to treat VPN apps as browsers and “require a special approval and entitlement from Apple”.

There probably won’t be much movement on this issue until the release of the full report on MUL22-03, but for now the opinion from those involved in testing seems to be that the risk is small.

FBI, CISA warn of disinformation ahead of midterms

In less than four weeks, the balance of power in the US House of Representatives and Senate will be up for grabs, along with a host of gubernatorial seats, and positions at the state and municipal levels.

With everyone preparing to cast their ballots, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) have reminded people about the potential threat of disinformation.

Foreign actors may intensify efforts to influence outcomes of the 2022 midterm elections by circulating or amplifying reports of real or alleged malicious cyber activity on election infrastructure

It warns that foreign actors may “create and knowingly disseminate false claims and narratives regarding voter suppression, voter or ballot fraud, and other false information intended to undermine confidence in the election processes and influence public opinion of the elections’ legitimacy.”

It’s not news that countries outside the US have engaged in disinformation operations before. And though we may immediately think of Russia, Iran, and China, it’s worth keeping the other 70-odd countries that are into disinformation campaigns in mind too.

Nation-backed threat actors use several methods to amplify fake narratives and false claims, incite anger, and mobilize angry voters. They use public online spaces such as social media networks, as well as email, text messages, online journals and forums, spoofed websites, and fake personas.

The agencies also warn that threat actors may claim to have successfully hacked or leaked election-related data in order to sow distrust in the US system and undermine voter confidence. They also affirm that while threat actors might be making hay in the discourse that precedes elections, the actual election process has not been compromised.

No information suggesting any cyber activity against US election infrastructure has impacted the accuracy of voter registration information, prevented a registered voter from casting a ballot, or compromised the integrity of any ballots cast.

Americans are urged to examine both the information they receive, and its sources, with a critical eye, and to seek out reliable and verified news to share, react to, and discuss with others.

Potential election crimes, such as intentional disinformation about the manner, time, or place of voting, should be reported to your local FBI Field Office, they say.

Android and Chrome start showing passwords the door

Google has announced that it’s bringing passkey support to both Android and Chrome. On May 5, 2022, it said it would implement passwordless support in Android and Chrome, and the latest announcement about passkeys is an important step in that journey.

Passkeys

Passkeys are a replacement for passwords. They are faster to sign in with, easier to use, and much more secure. Sounds good, right? So, why isn’t everybody using them already? Maybe because we do a bad job of explaining how easy they are.

Although they share four letters, passkeys are nothing like passwords. They use public-key cryptography, which requires a set of two cryptographic “keys”. One is public and one is private.

The key pair is generated on the user’s device, and the public key is stored by whatever service the user is logging in to. When the user wants to log in, the service sends them some data to “sign”; the user signs it with their private key and sends it back. The service then checks the signature with the public key. If the check passes, that’s proof that the owner of the private key signed the data and is therefore the owner of the public key.
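
Here is a minimal sketch of that challenge-and-sign exchange, using Ed25519 keys and Node’s built-in crypto module in TypeScript. It only illustrates the core idea; real passkey logins (WebAuthn) wrap the challenge in additional structure such as the site’s origin, and the names below are just for the example.

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// "Authenticator" side: generate a key pair. Only the public key is ever
// shared with (and stored by) the service during registration.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Service side: send a fresh random challenge for this login attempt.
const challenge = randomBytes(32);

// Authenticator side: sign the challenge with the private key,
// which never leaves the device.
const signature = sign(null, challenge, privateKey);

// Service side: check the signature against the stored public key. If it
// verifies, whoever holds the matching private key made this login attempt.
console.log(verify(null, challenge, publicKey, signature)); // true
```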

A user does not have to remember the public key or, heaven forbid, type it out in some form. That would only make matters worse. The public key also does not need to be kept secret, which means you don’t have to worry about data breaches, post-its, machine-in-the-middle attacks, or any other way it could be discovered or fall into the wrong hands, because the wrong hands are welcome to it: It is useless to them.

As long as your private key is safe, you are secure. The private key stays on a device you own, such as a phone or hardware key, is never shared with anybody or anything, and never leaves your possession. Its job is to prove that the public key is really yours.

Authenticators

So, your private key is something you hold on to, but where do you keep it, what actually does the signing with it, and how is it secured? All of this happens on devices called “authenticators”.

An authenticator is a device that knows how to create and share the public key, knows how to store private keys, and knows how to use them to sign things. Authenticators can be hardware keys, phones, laptops, or any other kind of computing device. Best of all, authenticators can be a separate device from the one you’re logging in on. So you can log in to a website on your laptop and use a phone paired with your laptop as the authenticator.

Since passkeys are built on industry standards, this works across different platforms and browsers—including Windows, macOS, iOS, and ChromeOS. An Android user can sign in to a passkey-enabled website using Safari on a Mac, and a Windows user can do the same using a passkey stored on their iOS device.

Before an authenticator will share a public key or sign you into a site, you have to authorise it to do so using a “gesture”. What constitutes a gesture is deliberately vague: It could be a button press, a successful Windows Hello face recognition, entering a PIN, or pressing a finger on your phone’s fingerprint sensor.

What’s important to remember here is that the gesture does not get sent to the website; it just permits the authenticator to do its work. So, if your authenticator uses a fingerprint scanner, there is no need to worry that your fingerprints will be sent to the website, exposed in a breach, and reused at a crime scene. Whether it’s a fingerprint, a facial scan, or anything else, the website knows nothing about the gesture at all.

Lost passkeys

Now your greatest worry is probably—what happens if I lose my private key or the device it’s on? This is where Google’s announcement comes in. (In my eagerness to explain, I almost forgot to tell you what it was exactly that Google announced.)

The announcement is:

  • Users can create and use passkeys on Android devices, which are securely synced through the Google Password Manager.
  • Developers can build passkey support on their sites for end users using Chrome via the WebAuthn API, on Android and other supported platforms (a minimal sketch of the browser-side call follows below).
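
For developers, the browser call behind that second bullet is the standard WebAuthn API. Here is a hedged sketch, in TypeScript, of what creating a passkey might look like; the relying-party details, user information, and challenge are placeholders, and a real site would receive the challenge and user ID from its own server.

```typescript
// Runs in the browser. Values marked "placeholder" would come from your server.
async function createPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder; use a server-issued challenge
      rp: { id: "example.com", name: "Example Site" },        // placeholder relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),       // placeholder user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // -7 = ES256
      authenticatorSelection: {
        residentKey: "required",       // a discoverable credential, i.e. a passkey
        userVerification: "preferred", // the "gesture": biometric, PIN, and so on
      },
    },
  });
}
```

The credential returned by the browser contains the new public key, which the site stores for future logins; the private key stays on the authenticator, exactly as described above.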

Passkey synchronization makes it very hard to lose your private key: Passkeys are recoverable even in the event that all associated devices are lost.

This is similar to Apple’s ability to recover a keychain. To do so, a user must authenticate with their iCloud account and password and then respond to an SMS sent to their registered phone number. With the keychain in hand, passkeys can be recovered through iCloud keychain escrow.

Shift of responsibility

For years the responsibility for safe authentication has been put in the wrong hands: Users’. Since we all know that the strength of a chain is never greater than that of the weakest link, we’ve been trying to improve the strength of that link, sometimes by educating users, yelling at them, even lying to them, or doing anything else we thought could encourage more responsible use of passwords.

What we haven’t done, or at least not as loudly, is wonder how threat actors got their hands on all these username-password combinations they could use in credential stuffing attacks. The answer is breaches. Asking a visitor to come up with a unique and secure password and then having thousands or even millions of them stolen doesn’t make the user feel any better about password security, does it now?

If you will allow me another analogy: In the past we sent a canary down into the mines to warn the miners if the carbon monoxide level was too high. The gases would kill the canary before killing the miners, thus providing a warning to exit the tunnels immediately. To improve that method, we didn’t start breeding stronger canaries, we improved the methods of detecting toxic gases.

Passwordless future

For years we’ve been asking when we can get rid of passwords for good. Not yet, but this is a step closer. Now that the technology is available, we just have to get everyone on board.

The good news is that every modern browser already knows how to handle its part by supporting the WebAuthn standard, so all we need now is for websites and other online resources to support it, and for vendors to create compatible authenticators.

Last year Microsoft announced that, as of September 15, 2021, you can completely remove the password from your Microsoft account and use the Microsoft Authenticator app, Windows Hello, a security key, or a verification code sent to your phone or email to sign in to Microsoft apps and services. Together with Google and Microsoft, Apple has committed to expanded support for the FIDO standard to accelerate the availability of passwordless sign-ins.

Let us know in the comments whether you agree that a better understanding of how passkeys work will make the transition go faster.

Introducing Malwarebytes Managed Detection and Response (MDR)

With our Managed Detection and Response (MDR) service now generally available for businesses and MSPs, you may be wondering: What is MDR, how does Malwarebytes MDR work, and do I need it?

Underpinned by our award-winning EDR technology, Malwarebytes MDR offers powerful and affordable threat prevention and remediation services, provided by a team of cybersecurity experts that remotely monitors your network 24/7 to detect, analyze, and prioritize threats.

Learn more about Malwarebytes MDR 

Malwarebytes MDR

MDR is a service that provides proactive, purpose-built threat hunting, monitoring, and response capabilities powered by a team of advanced cybersecurity technicians, combined with the analysis of robust correlated data. It takes the guesswork out of your most complex cybersecurity threats by delivering 24/7 threat detection, rapid alerts, prevention, and remediation.

Malwarebytes MDR defends your network every day and all night, safeguarding your data, reputation, and finances with always-on dedicated protection.

While it’s technically possible for SMBs to build out their own MDR program in-house, doing so requires time, expense, and effort equivalent to starting an entirely new IT security department. You’ll need to build out your own SOC facilities, hire a minimum of five full-time employees to provide 24/7 coverage, and so on. That’s why many SMBs opt to outsource their MDR to a service provider.

Our experts are your experts: With Malwarebytes MDR, our team of cybersecurity professionals acts as an extension to your security team, ensuring that you have the staff, skill, and experience you need to maximize your cybersecurity posture on a 24/7 basis.

Malwarebytes MDR workflow

To recap, the basic workflow for Malwarebytes MDR goes like this:

  1. The Malwarebytes MDR team monitors and analyzes your system, checking for IOCs and threat hunting, and finds something malicious.

  2. Our MDR team sends you an email alerting you to the threat and asking you to go to the MDR portal in Nebula.

  3. You log into Nebula and click on the MDR portal in the upper-righthand corner.

  4. In the main portal view you can see a basic log of everything that the analysts have done on that specific system. Click “Go to Case” for more details on specific threats.

  5. Clicking “Go to Case” brings you back to Nebula for whatever suspicious activity or alert the MDR team needs you to remediate.

  6. You do the remediation, go back to the MDR portal, and tell the MDR team that you’ve completed it.

  7. The MDR team closes out the alert.

How it works

It all starts with contextual enrichment. EDR alerts are enriched with context from threat intelligence feeds:

  1. Customer telemetry data from all deployed Malwarebytes products ingested.

    1. EDR (including Brute Force Protection) and Cloud Security Modules

  2. Threat intelligence feeds from multiple sources ingested

    1. Premium external threat feeds

    2. Internal Malwarebytes feeds including crowd-sourced intelligence from the entire Malwarebytes customer base (B2B and Consumer)

    3. Open-source feeds

  3. Telemetry data and threat intelligence correlated with alert

    1. Generates additional context to the alert (e.g., more clues to the behavior and origin)

The MDR Analyst Team monitors endpoints 24×7 and fields incoming alerts:

  1. Artifacts of alert rapidly reviewed and prioritized for triage

    1. Automations sift through the artifacts (processes, actions, etc.) to identify the most interesting ones

  2. Case opened on each artifact requiring triage

    1. Notification provided to customer within MDR Portal

  3. Case analyzed by MDR Analyst team

    1. Deep analysis and review leveraging enriched alerts

    2. Escalation to Tier 3 analysts, 2nd opinions within the team

  4. ‘Best course of action’ decided and communicated

    1. MDR Analysts communicate one of two possible decisions via the customer portal:

      1. Customer verification of artifact required 

      2. Remediation required

Then come the options for remediation:

  1. Malwarebytes managed 

    1. Malwarebytes automatically provides remediation by removing threats using EDR capabilities 

    2. Reboot, reimaging, and other onsite tasks will require customer involvement

  2. Collaborative

    1. Malwarebytes notifies customer who can authorize managed remediation or perform remediation themselves

    2. Work together to take care of it outside of business hours, etc.

  3. Manual (customer remediates, with guidance from Malwarebytes)

    1. Malwarebytes provides notification to customer with detailed guidance to perform remediation themselves

Finally, for case closure:

  1. Closure notification to customer within the MDR portal

  2. History of closed cases available for compliance and reporting needs

    1. Case event details available to customer

Want to learn more?

If you want to know more about MDR and if it’s right for you, check out these resources: