IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Vulnerable WordPress plugin leaves online shoppers vulnerable

The most popular web content management system (CMS) is WordPress, which is used by more than 30% of all websites. By extension, the most popular ecommerce platform in the world is WooCommerce, a plugin that turns a WordPress website into an online shop. In fact, WooCommerce is so popular that it isn’t just part of WordPress’s software ecosystem; it also has a software ecosystem of its very own.

There are hundreds of WordPress plugins that are designed to work with or extend the WooCommerce plugin in some way, and many of them are mature commercial software products in their own right. One such product is a popular extension called WooCommerce Dynamic Pricing and Discounts, which sells for a little less than $70 and has been purchased almost 20,000 times.

If your site is running that plugin, you need to update it to version 2.4.2 immediately.

Researchers recently discovered multiple security vulnerabilities affecting version 2.4.1 and below. These vulnerabilities have been fixed in version 2.4.2, which was released on August 22, 2021.
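Whether an install is still exposed comes down to a simple version comparison against the fixed release. A quick, purely illustrative sketch in Python (the plugin itself is written in PHP, so this is just the comparison logic):

```python
def is_patched(installed: str, fixed: str = "2.4.2") -> bool:
    """Return True if the installed version is at or above the fixed release."""
    def to_tuple(version: str) -> tuple:
        # "2.4.1" -> (2, 4, 1), so comparisons are numeric, not lexicographic
        return tuple(int(part) for part in version.split("."))
    return to_tuple(installed) >= to_tuple(fixed)

print(is_patched("2.4.1"))  # False: still vulnerable
print(is_patched("2.4.2"))  # True: patched
```

The tuple conversion matters: comparing the raw strings would rank "2.10.0" below "2.4.2".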

The vulnerabilities

The first vulnerability is a high-severity stored cross-site scripting (XSS) bug. Cross-site scripting (XSS) is a type of security vulnerability that lets attackers inject client-side scripts into web pages viewed by other users.

The researchers found that the vulnerable code missed two important checks: a capability check that ensures a user is authorized to do a particular thing, and a security nonce (short for “number used once”) that tries to ensure a web request is asked and answered by the same site, and that the request didn’t come from an imposter running a cross-site request forgery (CSRF) attack.

Without a capability check, the vulnerable function—which allowed users to import plugin settings—was available to anyone, including an attacker. And because some of the settings fields weren’t sanitized, an attacker could use the vulnerability to inject JavaScript code into the imported JSON-encoded file.

The second vulnerability exists in the plugin’s settings export functionality, which was also missing a capability check. In this case an unauthenticated attacker can export the plugin’s settings, inject JavaScript code into the resulting JSON file and then reimport the settings, including the malicious JavaScript, using the first vulnerability.
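The plugin itself is written in PHP, but the two missing safeguards translate directly to any language. Here is a minimal, purely illustrative sketch in Python of what a hardened settings-import handler looks like—a capability check, a nonce check, and escaping of every imported string field. All names, the capability set, and the nonce scheme are invented for the example, not the plugin’s actual code:

```python
import html
import hmac
import json
import secrets

# Issued to the admin page when the import form is rendered (illustrative scheme).
SERVER_NONCE = secrets.token_hex(16)

def import_settings(raw_json: str, user_capabilities: set, nonce: str) -> dict:
    # Capability check: only authorized users may import settings.
    if "manage_options" not in user_capabilities:
        raise PermissionError("user is not authorized to import settings")
    # Nonce check: reject requests that didn't originate from our own form (CSRF).
    if not hmac.compare_digest(nonce, SERVER_NONCE):
        raise PermissionError("invalid or missing nonce")
    # Sanitize every string field so injected <script> or <meta> tags are neutralized.
    settings = json.loads(raw_json)
    return {k: html.escape(v) if isinstance(v, str) else v
            for k, v in settings.items()}

payload = json.dumps({"banner": "<script>steal()</script>"})
safe = import_settings(payload, {"manage_options"}, SERVER_NONCE)
print(safe["banner"])  # &lt;script&gt;steal()&lt;/script&gt;
```

The vulnerable plugin effectively skipped the first two checks entirely and applied the third only to some fields, which is what made the import endpoint usable as a stored-XSS delivery mechanism.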

The possible consequences

JavaScript code can be used to perform all kinds of malicious activity, from stealing cookies to spreading malware. In this case it’s also possible to replace the JavaScript code with HTML tags, such as a Meta Refresh tag that could be used to redirect visitors to a malicious website for instance.

Because the code injected via the settings import into WooCommerce Dynamic Pricing and Discounts is run on every product page of a WooCommerce shop, it looks like an ideal vulnerability for credit card skimmers (malicious code that reads your credit card details as they are entered into the checkout form).

As we reported last year, WooCommerce is increasingly being targeted by criminals, because of its large market share. We asked Jérôme Segura, Senior Director of Threat Intelligence at Malwarebytes, and an avid follower of skimmers, how groups that use them would react to vulnerabilities like these.

“Two common mistakes website owners often make is to leave their Content Management System (CMS) unpatched and believe they are not an interesting target. In many cases, users may choose not to apply security updates as they fear that it may introduce bugs or even break a website from loading properly. While this is true, it creates the perfect opportunity for online criminals to exploit known vulnerabilities on a large scale.

Magento, WooCommerce and several other CMSes are constantly being abused for a number of reasons. If your website does e-commerce, it becomes even more interesting as threat actors can not only target you but also your customers and their financial data in attacks such as Magecart.

Applying updates promptly is a necessity, and if for one reason of another it’s not possible, other solutions such as Web Application Firewalls exist to block known and unknown automated attacks.”

Mitigation

When using a CMS, and especially a popular one, you will have to keep an eye out for updates—for both the CMS itself and any plugins you have installed. Speed is important. Attackers are always aware of the latest vulnerabilities and will scan the Internet for unpatched sites to hijack, sometimes within hours of a patch being made available.

To do your online shopping safely it is advisable to take as many precautions as possible. There are browsers and browser configurations that will help you against falling victim to skimmers, malicious redirects, and other unwelcome code on a site you are visiting.

Stay safe, everyone!

The post Vulnerable WordPress plugin leaves online shoppers vulnerable appeared first on Malwarebytes Labs.

WhatsApp hit with €225 million fine for GDPR violations

WhatsApp was hit with a €225 million fine for violating the General Data Protection Regulation (GDPR), the European Union’s sweeping data protection law that has been in effect for more than three years.

The fine represents the highest ever penalty levied by the Irish Data Protection Commission, which serves as the primary data protection authority for WhatsApp and the messaging app company’s parent Facebook, which has its EU headquarters based in Ireland. It is also the second-highest penalty ever issued for GDPR violations. The highest, levied against Amazon by Luxembourg’s National Commission for Data Protection, was a massive $886 million.

WhatsApp said it disagreed with the Irish Data Protection Commission’s (DPC) findings, which were based on an investigation, begun in December 2018, into whether WhatsApp failed to transparently tell both users and non-users how their data was handled.

“We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so,” WhatsApp said in response to the penalty. “We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate.”

Interestingly, the Irish DPC said that, when it shared its findings with other EU member-states’ own data regulators, eight of those regulators disagreed. During a follow-on dispute resolution process, the Irish DPC was told that it should actually increase its initial penalty amount.

Max Schrems, the legal activist who has proven himself to possibly be the largest thorn in Facebook’s side, welcomed the Irish DPC’s decision, but warned about the likely prolonged legal battle ahead, as WhatsApp will probably fight the penalty in court.

“In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork,” Schrems wrote. “I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”

The Irish DPC said its investigation into WhatsApp began after it received several complaints from users and non-users after the passage of GDPR. In its final decision, the Irish DPC said it found that WhatsApp had failed to comply with several components of Articles 12, 13, and 14 of GDPR, which relate to how a company transparently tells its users and non-users about how their data is handled. In particular, the Irish DPC investigated whether WhatsApp was transparent about how it shared personal data with its parent company Facebook, and it slammed WhatsApp for keeping information either vague or behind too many separate FAQ and privacy policy pages.

“[T]he information that has been provided, regarding WhatsApp’s relationship with the Facebook Companies and the data sharing that occurs in the context of that relationship, is spread out across a wide range of texts and a significant amount of the information provided is so high level as to be meaningless,” the Irish DPC said. In a similar set of findings regarding WhatsApp’s data-sharing relationship with Facebook, the Irish DPC said “it is unsatisfactory that the user has to access information as to the identity of the Facebook Companies on Facebook’s website and for the information to be broken up over three or four different ‘articles’ that each link back to one another in a circular fashion. There is no reason why this information could not be hosted, in a concise piece of text, on WhatsApp’s website.”

Though WhatsApp disagreed with the Irish DPC’s findings overall, the data regulator’s claims of lacking transparency are not, by any means, new allegations. Just this year, WhatsApp walked itself into a firestorm when it scared users into thinking that their accounts would be deactivated if they refused to agree to a new privacy policy. The problem? It was two-fold, actually—user accounts would not be deactivated (they’d simply be egregiously stymied) and most of the privacy policy changes that users were upset about had actually already been put into place.

WhatsApp eventually walked back its threat to disable key features for users who refused to accept the new privacy policy—which it messaged as not-a-deactivation—but a great deal of damage had already been done. Users had already flocked to competitors in January, and there has been little indication that they’ve returned.  


BrakTooth Bluetooth vulnerabilities, crash all the devices!

Security researchers have revealed details about a set of 16 vulnerabilities that impact the Bluetooth software stack that ships with System-on-Chip (SoC) boards from several popular vendors. The same group of researchers disclosed the SweynTooth vulnerabilities in February 2020. They decided to dub this set of vulnerabilities BrakTooth.

BrakTooth affects major SoC providers such as Intel, Qualcomm, Texas Instruments, Infineon (Cypress), Silicon Labs and others. Vulnerable chips are used by Microsoft Surface laptops, Dell desktops, and several Qualcomm-based smartphone models.

The researchers say they only examined the Bluetooth software libraries of 13 SoC boards from 11 vendors. Looking further, however, they found that the same Bluetooth firmware was most likely used inside more than 1,400 chipsets, used as the base for a wide range of devices, such as laptops, smartphones, industrial equipment, and many types of smart “Internet of Things” devices.

It needs to be said that the impact is not the same for every type of device. Some devices can merely be crashed by specially crafted Link Manager Protocol (LMP) packets—the protocol Bluetooth uses to set up and configure links to other devices—which can be cured with a simple restart. Others allow a remote attacker to run malicious code on the vulnerable device via those same LMP packets.

Researchers believe the number of affected devices could be in the billions.

All the vulnerabilities

Full technical details and explanations for all 16 vulnerabilities can be found on the dedicated BrakTooth website where they are numbered V1 – V16 along with the associated CVEs. The researchers claim that all 11 vendors were notified about these security issues months ago (more than 90 days), well before they published their findings.

Espressif (pdf), Infineon, and Bluetrum have released patches. Despite having received the necessary information, the other vendors acknowledged the researchers’ findings but could not confirm a definite release date for a security patch, citing internal investigations into how each of the BrakTooth bugs impacted their software stacks and product portfolios. Texas Instruments said they would not be addressing the flaws impacting their chipsets.

CVE-2021-28139

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). The most serious vulnerability in BrakTooth has been listed under CVE-2021-28139, which allows attackers in radio range to trigger arbitrary code execution with a specially crafted payload.
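A CVE identifier has a fixed shape: the literal prefix CVE, the year the identifier was assigned, and a sequence number of four or more digits. A small sketch that validates and splits one:

```python
import re

# CVE IDs look like CVE-YYYY-NNNN, where the sequence part is 4+ digits.
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id: str) -> tuple:
    """Return (year, sequence number) for a well-formed CVE ID, or raise ValueError."""
    match = CVE_PATTERN.match(cve_id)
    if not match:
        raise ValueError(f"not a valid CVE identifier: {cve_id!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_cve("CVE-2021-28139"))  # (2021, 28139)
```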

While CVE-2021-28139 was tested and found to affect smart devices and industrial equipment built on Espressif Systems’ ESP32 SoC boards, the issue may impact many of the other 1,400 commercial products that are likely to have reused the same Bluetooth software stack.

Mitigation

The researchers emphasize the lack of basic tests in Bluetooth certification to validate the security of Bluetooth Low Energy (BLE) devices. The BrakTooth family of vulnerabilities revisits and reasserts this issue in the case of the older, but still heavily used, Bluetooth Classic (BR/EDR) protocol implementations.

The advice to install patches and query your vendor about patches that are not (yet) available will not come as a surprise. We would also advise users to disable Bluetooth on devices that do not need it. This way you can prevent attackers from sending you malformed LMP packets. Since BrakTooth is based on the Bluetooth Classic protocol, an adversary would have to be in the radio range of the target to execute the attacks. So, in a safe environment Bluetooth can be enabled.

Stay safe, everyone!


Macs turn on apps signed by Symantec, treat them as malware

On August 23, following an update to Apple’s XProtect system—one of the security features built into macOS—some Mac users began to see security alerts about some of their apps, claiming that they “will damage your computer,” and offering users the option to “report malware to Apple.” This has led to much confusion online, and to an influx of requests in our support system asking about this malware. The most common so far has been from an app named ReceiverHelper.

"ReceiverHelper" will damage your computer. Report malware to Apple to protect other users.
An Apple XProtect alert about ReceiverHelper

Is ReceiverHelper malware?

If you’re one of the affected folks, the good news is that this isn’t malicious at all. It is a component of Citrix, which is legitimate software made by the company of the same name. Not all Citrix software is being flagged as malicious, fortunately. Only some older versions of the software are causing problems.

Of course, if you thought that this was malware, we’d have to forgive you. Not only is macOS apparently saying that it is, but the name is highly suspicious. There has been a fair bit of Mac adware going around lately with odd two-word names, like StandardBoost or ActivityInput. All of these adware names are pretty generic, revealing nothing about what they’re actually supposed to be doing. Unfortunately, the name “ReceiverHelper” fits right in.

ReceiverHelper is not alone. There are a few other apps acting up. Among them are two other Citrix apps, ServiceRecords and AuthManager_Mac. (It’s almost like Citrix is trying to make its apps sound shady!) Other companies are also seeing an impact on older apps, such as Cisco AnyConnect’s vpnagentd.

What’s causing the warnings?

As was the case with a similar issue affecting HP printers last year, it’s all about code signing. What is code signing, you ask? In short, it’s a cryptographic way to validate that an app has not been tampered with. If an app is signed by the company that created it, you can be sure you’re using an unadulterated version of the software. Code signing is a really important security feature, and all apps really ought to be signed. If they’re not, they can’t be considered 100% safe. (For a primer on code signatures and certificates, see our previous coverage of the HP incident.)

In simple terms, code signing relies on a chain of trust: Signing is performed using a secret key. An organization proves its ownership of that secret key using a digital certificate, and that certificate’s authenticity is vouched for by a certificate authority (CA).

In the HP incident, HP revoked the certificate it used to sign a lot of its printer software. The HP software on people’s Macs didn’t change but the chain of trust that vouched for it was broken, so it began to trigger alerts as if it was malware.

This time around the chain of trust has been broken again, but the problem isn’t the certificates, it’s the CA that vouches for the certificates.

A CA is a trusted organization that issues certificates. In the case of Mac apps, you’re really supposed to get your certificates directly from Apple. However, not everyone does, and some companies will use certificates obtained from third parties to sign their apps.

Citrix did exactly this, and the decision has come back to haunt them. It turns out they made a really poor choice of CA to obtain their certificate from: Symantec.

What’s wrong with Symantec?

A few years ago, Symantec offered CA services. However, Symantec played a bit fast and loose with the rules, which is never good for a CA. An important part of being a certificate authority is trust, and Symantec made some big mistakes as a CA. Those mistakes led to an investigation, and what was found was highly concerning.

As a result, it was widely agreed that trust for Symantec certificates should be gradually phased out. The slow process of distrusting Symantec certificates began in 2018.

On August 23, 2021, Apple pushed out an update for XProtect that, among other things, rejects any code signed with certificates issued by Symantec. The Gatekeeper process in macOS will reject any apps signed with such a certificate, showing the infamous “will damage your computer” message.

For those technically inclined and in possession of one of the affected apps, you can verify this yourself with the codesign and spctl commands in the Terminal:

% codesign --verify --verbose /usr/local/libexec/ReceiverHelper.app
/usr/local/libexec/ReceiverHelper.app: valid on disk
/usr/local/libexec/ReceiverHelper.app: satisfies its Designated Requirement

% spctl -a /usr/local/libexec/ReceiverHelper.app
/usr/local/libexec/ReceiverHelper.app: rejected

The codesign command shows that the code signature is still valid—meaning that the app hasn’t been tampered with and the certificate hasn’t been revoked. However, the spctl command, which checks the file with Gatekeeper, shows that it is rejected, and thus will not be allowed to run.
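The distinction spctl draws—a signature that is valid on disk but a chain Gatekeeper won’t accept—can be illustrated with a toy chain-of-trust walker. This is a deliberately simplified model with invented certificate names, not how macOS actually stores or evaluates certificates:

```python
# Toy model: each certificate records who issued it. A signature can be
# cryptographically valid while the chain it hangs from is distrusted.
CERT_CHAIN = {
    "ReceiverHelper.app": "Citrix signing cert",
    "Citrix signing cert": "Symantec CA root",
    "SomeOther.app": "Vendor signing cert",
    "Vendor signing cert": "Apple Developer ID CA",
}

TRUSTED_ROOTS = {"Apple Developer ID CA"}
DISTRUSTED_ROOTS = {"Symantec CA root"}

def gatekeeper_verdict(subject: str) -> str:
    """Walk issuer links up to a root and decide whether to run the app."""
    node = subject
    while node in CERT_CHAIN:
        node = CERT_CHAIN[node]
        if node in DISTRUSTED_ROOTS:
            # Signature may still verify on disk, but the issuing CA is distrusted.
            return "rejected"
    return "accepted" if node in TRUSTED_ROOTS else "rejected"

print(gatekeeper_verdict("ReceiverHelper.app"))  # rejected
print(gatekeeper_verdict("SomeOther.app"))       # accepted
```

In this model, codesign’s job corresponds to checking the signature itself (untouched here), while spctl’s job corresponds to the walk up the chain—which is exactly where the Symantec-issued certificates now fail.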

How do I fix these issues?

The best fix is to simply remove or update the affected software. Unfortunately, we can’t help you with that. We’re good at removing malware here at Malwarebytes, but that’s not what this is. You’ll need to find out from the vendor of the affected software how to remove or replace it. For Citrix software, we recommend contacting Citrix support. (Unfortunately, we’ve gotten some reports that Citrix support is turning folks away if they don’t have active accounts, so you may need to be persistent.)

We do know that the affected Citrix apps (that we know about) are located at the following path:

/usr/local/libexec/

Why there? Excellent question… I have no idea. It’s not the right place for these things on macOS. Deleting ReceiverHelper, ServiceRecords, and AuthManager_Mac from this location may solve your problem. It also may cause other problems, as that wouldn’t be a complete uninstallation. You do this at your own risk and we suggest that you treat it as a method of last resort.

Avoid scams!

Unfortunately, if you type something like “remove ReceiverHelper” into Google right now, you’re going to get a bunch of scam sites in the results. These sites purport to help you remove the software, but in reality the instructions are automatically generated. The goal of these sites is to rank high on search results, call whatever the user was searching for malware (ReceiverHelper, et al, in this case), then promote some junk software to folks who visit and find they’re having trouble with the (nonsensical) instructions.

When you’re having a problem like this, Google and other search engines can be your worst enemy. Instead, consider asking on the Malwarebytes forums, Apple’s forums, or similar places, to get better advice.


Google Play sign-ins can be abused to track another person’s movements

Even people who have been involved in cybersecurity for over 20 years make mistakes. I’m not sure whether that is a comforting thought for anyone or whether everyone should be worried now. But it is what it is and I make it a habit of owning my mistakes. So here goes.

With the aid of Google I was able to “spy” on my wife’s whereabouts without having to install anything on her phone.

In my defense, this whole episode happened on an operating system that I am far from an expert on (Android), and I was trying to be helpful. But what happened was unexpected.

What happened?

I installed an app on my wife’s Android phone and to do so, I needed to log into my Google account because I paid for the app. All went well, but after installing the app and testing whether it worked, I forgot to log out of Google Play. Silly, I know, but there you have it.

As it happens, at the time I installed the app on my wife’s phone I was investigating how much information the Google Maps Timeline feature was gathering about me. The timeline is an often-overlooked Google feature that “shows an estimate of places you may have been and routes you may have taken based on your Location History”. I was curious to see what Google records about me, even though I never actively check in or review places.

I started noticing strange things but couldn’t quite put my finger on what was going on. It showed me places I had been near, but never actually visited. I figured this was nothing more than Google being an over-achiever. But a few days ago I got my update and a place was listed that I had not even been near, but I knew my wife had been. Then, suddenly, it dawned on me: I was actually receiving location updates from my wife’s phone, as well as mine.

The only thing that might have alerted my wife to this unintentional surveillance—but never did—was my initial in a small circle at the top right corner of her phone, when she used the Google Play app. (You have to touch the icon to see the full details of the account that is logged in.)

After I logged out of Google Play on my wife’s phone the issue was still not resolved. After some digging I learned that my Google account was added to my wife’s phone’s accounts when I logged in on the Play Store, but was not removed when I logged out after noticing the tracking issue.

What needs to change?

I have submitted an issue report to Google, but I’m afraid they will tell me that it is a feature and not a bug.

There are a few things that Google could improve here:

The Google timeline was enabled on my phone, not on my wife’s, so I feel I should not have received the locations visited by her phone.

When I logged in under my account on her Google Play I got a “logged in from another device” warning. I feel there should have been something similar sent to her phone. Something along the lines of “someone else logged into Google Play on your phone.”

Google Play only shows the first letter of the Google account that is logged in.


Like I said, my wife never noticed, and it’s easy to imagine how even this small giveaway could be overcome by a malicious user.

Of course, a cynic might say that the fundamental obstacle here is that if your business model demands that you hoover up as much information about somebody as possible, the opportunities for this kind of unintentional, tech-enabled abuse are likely to increase.

Coalition Against Stalkerware

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware (CAS), does everything in its power to keep people safe from being spied on. But malware scanners are limited to finding apps that spy on the user and send the information elsewhere. In this case even TinyCheck would not be helpful, as the information is not sent to a known, malicious server.

We should be very clear here, though. This situation is not a form of stalkerware, and it does not, by design, attempt to work around a user’s consent. This is more aptly a design and user experience flaw. However, it is still a flaw that can and should be called out, because the end result can still provide location tracking of another person’s device.

Eva Galperin, director of cybersecurity for Electronic Frontier Foundation, which is also a founding partner of the Coalition Against Stalkerware, told Malwarebytes Labs that this flaw actually showcases why it is so important for technology developers to take into account situations of domestic abuse when designing their products.

The flaw “does highlight the importance of quality assurance and user testing that takes domestic abuse situations into account and takes the leakage of location data seriously,” Galperin said. “One of the most dangerous times in a domestic abuse situation is the time when the survivor is trying to disentangle their digital life from their abusers’. That is a time when the survivors’ data is particularly vulnerable to this kind of misconfiguration problem and the potential consequences are very serious.”

Tech-enabled abuse

You may be thinking that with physical access to my wife’s phone I could have done a lot worse than this, including installing a spyware app. But this kind of abusive misuse of legitimate technology is common enough that it has a name: Tech-enabled abuse.

And, as one of my co-workers pointed out, people are often lazy when they deal with computers and they will often settle for the first thing they find that works. And this really is a low effort method of spying on someone’s whereabouts. Plus you do not need to install anything and there is only a minimal chance of being found out.

How to stop it

For now the only thing we can do is check which accounts have been added to your phone. While this post talks about Google Maps location information, I’m pretty sure there will be other apps that are linked to your account rather than to your phone. Those apps could be queried for information by people other than the owner of the phone if they are logged into Google Play.

The instructions below can be slightly different for different versions of Android, but you will have an idea where to look for the added accounts.

Under Settings > Accounts and Backups > Manage Accounts I found my Google account listed. Tap the account you want to remove and you will see the option to do so. After removing my account from my wife’s phone, the tracking issue was finally resolved.


FTC bans SpyFone and its CEO from continuing to sell stalkerware

Nearly two years after the US Federal Trade Commission first took aim against mobile apps that can non-consensually track people’s locations and pry into their emails, photos, and videos, the government agency placed restrictions Wednesday on the developers of SpyFone—which the FTC called a “stalkerware app company”—preventing the company and its CEO Scott Zuckerman from ever again “offering, promoting, selling, or advertising any surveillance app, service, or business.”

Wednesday’s enforcement action represents a much firmer stance from the FTC compared to the settlement it reached in 2019, when the agency refrained from even using the term “stalkerware” and focused more on the lacking cybersecurity protections within the apps it investigated than on the privacy invasions those apps allowed.

FTC Commissioner Rohit Chopra, who made a separate statement on Wednesday, said much of the same.

“This is a significant change from the agency’s past approach,” Chopra said. “For example, in a 2019 stalkerware settlement, the Commission allowed the violators to continue developing and marketing monitoring products.”

That settlement prevented the company Retina-X Studios LLC and its owner, James N. Johns Jr., from selling their three Android apps unless significant security overhauls were made. At the time, critics of the settlement argued that the FTC was not preventing Retina-X from selling stalkerware-type apps, but merely from selling insecure stalkerware-type apps.

This time, the FTC spoke more forcefully about the threat that these apps present to overall privacy and their undeniable intersection with domestic violence, saying in a release that the “apps sold real-time access to their secret surveillance, allowing stalkers and domestic abusers to stealthily track the potential targets of their violence.”

In that same release Wednesday, Samuel Levine, Acting Director of the FTC’s Bureau of Consumer Protection said:

“SpyFone is a brazen brand name for a surveillance business that helped stalkers steal private information. The stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security. This case is an important reminder that surveillance-based businesses pose a significant threat to our safety and security. We will be aggressive about seeking surveillance bans when companies and their executives egregiously invade our privacy.”

The FTC’s enforcement against SpyFone will require the business—which is registered as Support King LLC—to also destroy any information that was “illegally collected” through its Android apps. It must also notify individuals whose devices were manipulated to run SpyFone apps, warning them that their devices both could have been monitored and may no longer be secure.

According to a complaint filed by the FTC which detailed its investigation into Support King, SpyFone, and Zuckerman, the company sold three versions of its SpyFone app (“Basic,” “Premium,” and “Xtreme”) at various prices. The company also sold “SpyFone for Android Xpress,” which the FTC described not as an app, but as an actual mobile device that came pre-installed with a one-year subscription for Android Xtreme. The price of the device started at $495.

The FTC also focused on the install methods for SpyFone’s apps, revealing that SpyFone required its users to subvert built-in cybersecurity protections on other mobile devices so as to avoid detection by those devices’ operating systems. Certain functions advertised by SpyFone also required extra manipulations by users, the FTC said.

“To enable certain functions of the SpyFone products, such as viewing outgoing email, purchasers must gain administrative privileges to the mobile device, such as through ‘rooting’ the mobile device, giving the purchaser privileges to install other software on the mobile device that the manufacturer would not otherwise allow,” the FTC said. “This access enables features of the SpyFone products to function, exposes a mobile device to various security vulnerabilities, and can invalidate warranties that a mobile device manufacturer or carrier provides.”

The FTC also found that SpyFone apps could hide themselves from view to their end-user—a telltale trait of apps that have been used to non-consensually track another user’s location and dig through their private messages and information.

The enforcement action also shows that the FTC is not strictly investigating the most popular or the most detected stalkerware-type apps on the market.

For example, Malwarebytes for Android detects the products made by SpyFone. From the start of 2021 until yesterday, August 31, 2021, Malwarebytes detected these products a total of 334 times. The average detection count for the past six months is about 42 detections per month. These are comparatively low numbers when looking at similar apps, as our most-detected stalkerware-type apps have accrued roughly 4,000 detections since the start of 2021.

Malwarebytes also welcomes the news of the FTC’s enforcement and is excited for the agency’s new direction on this well-documented, pernicious threat to privacy.

The post FTC bans SpyFone and its CEO from continuing to sell stalkerware appeared first on Malwarebytes Labs.

ProxyToken: Another nail-biter from Microsoft Exchange

Had I known this season of Microsoft Exchange was going to be so long I’d have binge watched. Does anyone know how many episodes there are?

Sarcasm aside, while ProxyToken may seem like yet another episode of 2021’s longest running show, that doesn’t make it any less serious, or any less eye-catching. The plot is a real nail-biter (and there’s a shocking twist at the end).

This week’s instalment is called ProxyToken. It’s a vulnerability that allows an unauthenticated attacker to perform configuration actions on mailboxes belonging to arbitrary users. For example, an attacker could use the vulnerability to forward your mail to their account, and read all of your email. And not just your account. The mail for all your co-workers too. So there are multiple possible themes for this episode, including plain old data theft, industrial espionage, or just espionage.

Background and character development

Before we can explain this week’s plot, it’s important to catch up on some background information, and meet some of the principal players.

Exchange Server 2016 and Exchange Server 2019 automatically configure multiple Internet Information Services (IIS) virtual directories during installation. The installation also creates two sites in IIS. One is the default website, listening on ports 80 for HTTP and 443 for HTTPS. This is the site that all clients connect to for web access.

This front end website for Microsoft Exchange in IIS is mostly just a proxy to the back end. The Exchange back end listens on ports 81 for HTTP and 444 for HTTPS. For all post-authentication requests, the front end’s job is to repackage the requests and proxy them to corresponding endpoints on the Exchange Back End site. It then collects the responses from the back end and forwards them to the client.

Which would all be fine, were it not for a feature called “Delegated Authentication” that Exchange supports for cross-forest topologies. An Active Directory forest (AD forest) is the topmost logical container in an Active Directory configuration, holding domains, users, computers, and group policies. A single Active Directory configuration can contain more than one domain, and the tier above the domain is the AD forest. Under each domain you can have several trees, and it can be tough to see the forest for the trees.

Forest trusts reduce the number of external trusts that need to be created. Forest trusts are created between the root domains of two forests. In such deployments, the Exchange Server front end is not able to perform authentication decisions on its own. Instead, the front end passes requests directly to the back end, relying on the back end to determine whether the request is properly authenticated. These requests that are to be authenticated using back-end logic are identified by the presence of a SecurityToken cookie.

The plot

For requests where the front end finds a non-empty cookie named SecurityToken, it delegates authentication to the back end. But the back end can be completely unaware that it needs to authenticate these incoming requests based on the SecurityToken cookie, because the DelegatedAuthModule that checks for this cookie is not loaded in installations that have not been configured to use the special delegated authentication feature. The astonishing end result is that specially crafted requests can go through without being subjected to authentication, neither on the front end nor on the back end.
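The broken hand-off can be sketched as a toy model. This is purely illustrative: the functions `front_end` and `back_end` are hypothetical stand-ins for the behavior described above, not actual Exchange code.

```python
# Toy model of the ProxyToken authentication flaw (illustrative only;
# `front_end` and `back_end` are hypothetical stand-ins, not Exchange code).

def back_end(request, delegated_auth_loaded):
    """Without DelegatedAuthModule loaded, the back end never validates
    the SecurityToken cookie -- the request sails through."""
    if delegated_auth_loaded:
        return "401 Unauthorized"  # the token would actually be checked here
    return "200 OK"

def front_end(request):
    """If a non-empty SecurityToken cookie is present, the front end skips
    its own authentication and delegates the decision to the back end."""
    if request.get("cookies", {}).get("SecurityToken"):
        # Delegated path: back end is trusted to authenticate, but on a
        # default install DelegatedAuthModule is not loaded.
        return back_end(request, delegated_auth_loaded=False)
    # Normal path: the front end authenticates the request itself.
    if not request.get("credentials"):
        return "401 Unauthorized"
    return back_end(request, delegated_auth_loaded=False)

# An attacker with no credentials, just a non-empty SecurityToken cookie:
print(front_end({"cookies": {"SecurityToken": "x"}}))  # 200 OK
print(front_end({}))                                   # 401 Unauthorized
```

The core of the bug is visible in the first branch: each side assumes the other is doing the authenticating, so nobody does.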

The twist

There is one additional hurdle an attacker needs to clear before they can successfully issue an unauthenticated request, but it turns out to be a minor nuisance. Each request to an Exchange Control Panel (ECP) page is required to carry a ticket known as the “ECP canary”. Without a canary, the request will result in an HTTP 500 response.

However, imagine the attacker’s luck: the 500 error response is accompanied by a valid canary, which the attacker can use in their next, specially crafted, request.
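Harvesting the canary is trivial once you know where to look. The sketch below pulls it from a simulated Set-Cookie header; the cookie name `msExchEcpCanary` is the one commonly reported for Exchange’s ECP canary, but treat both the name and the header value here as assumptions for illustration.

```python
# Sketch: extracting the ECP canary from the 500 response's cookies.
# The cookie name "msExchEcpCanary" and the sample value are assumptions.
from http.cookies import SimpleCookie

def extract_canary(set_cookie_header):
    """Parse a Set-Cookie header and return the ECP canary, if present."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    morsel = jar.get("msExchEcpCanary")
    return morsel.value if morsel else None

# Simulated header accompanying the HTTP 500 error page:
hdr = "msExchEcpCanary=abc123canary; path=/ecp"
print(extract_canary(hdr))  # abc123canary
```

With a valid canary in hand, the attacker’s follow-up request clears the “hurdle” the server itself just handed over.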

The cliffhanger

This particular exploit assumes that the attacker has an account on the same Exchange server as the victim. It installs a forwarding rule that allows the attacker to read all the victim’s incoming mail. On some Exchange installations, an administrator may have set a global configuration value that permits forwarding rules having arbitrary Internet destinations, and in that case, the attacker does not need any Exchange credentials at all. Furthermore, since the entire ECP site is potentially affected, various other means of exploitation may be available as well.

Credits

The ProxyToken vulnerability was reported to the Zero Day Initiative in March 2021 by researcher Le Xuan Tuyen of VNPT ISC. The vulnerability is listed under CVE-2021-33766 as a Microsoft Exchange Information Disclosure Vulnerability and it was patched by Microsoft in the July 2021 Exchange cumulative updates.

Other “must watch” episodes

Microsoft Exchange has been riveting viewing this year, and with four months of the year to go it seems unlikely that ProxyToken is going to be the season finale. So here’s a list of this season’s “must watch” episodes (so far). If you’ve missed any, we suggest you catch up as soon as possible.

And remember, Exchange is attracting a lot of interest this year. Everyone’s a fan. All of these vulnerabilities are being actively scanned for and exploited by malware peddlers, including ransomware gangs.

The post ProxyToken: Another nail-biter from Microsoft Exchange appeared first on Malwarebytes Labs.

A week in security (August 23 – August 29)

Last week on Malwarebytes Labs:

Other cybersecurity news:

  • A vulnerability in Microsoft Azure left thousands of customer databases exposed. (Source: Reuters)
  • Researchers from vpnMentor discovered an insecure database belonging to EskyFun, a Chinese Android game developer, exposing millions of gamers to hacking. (Source: vpnMentor)
  • The UK will begin making changes to privacy laws as they depart from GDPR as part of post-Brexit proceedings. (Source: The Wall Street Journal)
  • China is reportedly hiring hackers to become spies and entrepreneurs at the same time. (Source: The New York Times)
  • Phishers used an XSS vulnerability in UPS’s official site to spread malware. (Source: BleepingComputer)
  • JP Morgan Chase bank customers were notified that their data was inadvertently exposed to other users. (Source: SecurityWeek)
  • ALTDOS is hacking companies in Southeast Asia to steal data and either ransom it back to them or sell for profit. (Source: The Record by Recorded Future)
  • Flaws in infusion pumps could let hackers increase medication dosage. (Source: WIRED)
  • Researchers for Zscaler revealed the prevalence of fake streaming sites and adware during the 2020 Tokyo Olympics. (Source: Zscaler Blog)
  • Bumble, a popular dating app, was leaking users’ exact locations until it was recently patched. (Source: IT News)

Stay safe, everyone!

The post A week in security (August 23 – August 29) appeared first on Malwarebytes Labs.

Hackers, tractors, and a few delayed actors. How hacker Sick Codes learned too much about John Deere: Lock and Code S02E16

No one ever wants a group of hackers to say about their company: “We had the keys to the kingdom.”

But that’s exactly what the hacker Sick Codes said on this week’s episode of Lock and Code, speaking with host David Ruiz about his and fellow hackers’ efforts to peer into John Deere’s data operations center, where the company receives a near-endless stream of data from its Internet-connected tractors, combines, and other smart farming equipment.

For Sick Codes, what began as the discovery of a small flaw grew into a much larger group project that uncovered reams of sensitive information. Customer names, addresses, equipment type, equipment location, and equipment reservations were all uncovered by Sick Codes and his team, he said.

“A group of less than 10 people were able to pretty much get root on John Deere’s Operations Center, which connects to every other third party connectivity service that they have. You know, you can get every farms’ data, every farms’ water, I’m talking everything. We had like the keys to the kingdom. And that was just a few people in two days.”

Sick Codes

During their investigation, Sick Codes also tried to report these vulnerabilities to the companies themselves. But his and his team’s efforts were sometimes rebuffed. For one vulnerability, Sick Codes said, he was even pushed into staying quiet.

Listen to Sick Codes talk about his cyber investigation into agricultural companies, and his response to being led into a private disclosure program which he wanted nothing to do with, on this week’s episode of Lock and Code.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Further, you can watch Sick Codes’ presentation at DEF CON on YouTube, and you can read a summary of the talk. The hackers who helped discover the vulnerabilities, which you can read about here, included:

The post Hackers, tractors, and a few delayed actors. How hacker Sick Codes learned too much about John Deere: Lock and Code S02E16 appeared first on Malwarebytes Labs.

Microsoft warns about phishing campaign using open redirects

The Microsoft 365 Defender Threat Intelligence Team posted an article stating that they have been tracking a widespread credential phishing campaign using open redirector links. Open redirects have been part of the phisher’s arsenal for a long time and it is a proven method to trick victims into clicking a malicious link.

What are open redirects?

The Mitre definition for “open redirect” specifies:

“An http parameter may contain a URL value and could cause the web application to redirect the request to the specified URL. By modifying the URL value to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials. Because the server name in the modified link is identical to the original site, phishing attempts have a more trustworthy appearance.”

In layman’s terms: you click a link thinking you are going to a trustworthy site, but the link is constructed so that it redirects you to another, far less trustworthy, site. For instance, users who have been trained to hover over links in emails before clicking them may see a domain they trust and click it, after which they are redirected and land somewhere unexpected. And if the phisher is any good, it will look as if the victim landed where they expected to land.
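The hovering problem can be made concrete with a small sketch. The domains and the `url` parameter name below are fabricated examples; real campaigns use varying parameter names and heavier obfuscation.

```python
# Sketch: why hovering isn't enough. The visible domain looks trusted,
# but the real destination hides in a query parameter. All names here
# ("trusted.example.com", "url", etc.) are fabricated for illustration.
from urllib.parse import urlparse, parse_qs

link = "https://trusted.example.com/redirect?url=https://evil.example.net/login"

# What the user sees when hovering: the trusted domain.
visible_domain = urlparse(link).netloc

# Where the open redirect actually sends them:
real_target = parse_qs(urlparse(link).query).get("url", [""])[0]

print(visible_domain)  # trusted.example.com
print(real_target)     # https://evil.example.net/login
```

The mismatch between `visible_domain` and `real_target` is exactly what the phisher is counting on the victim never checking.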

CAPTCHA

Another element this phishing campaign uses to gain the victim’s trust is adding CAPTCHA verification to the phishing page. This is not uncommon. Researchers have found several new campaigns using legitimate challenge-and-response services (such as Google’s reCAPTCHA) or deploying customized fake CAPTCHA-like validation. Earlier research already showed an increase in CAPTCHA-protected phishing pages. Hiding phishing content behind CAPTCHAs prevents crawlers from detecting malicious content, and it even adds a legitimate look to phishing login pages.

After all, CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. So it will try to keep out the automated crawlers from security vendors and researchers, and only let in “puny humans” who are ripe to be phished. I wrote “try” in that last sentence on purpose, because there are several crawlers out there equipped with CAPTCHA-solving abilities that outperform mine. And repeating the same CAPTCHA on several sites only makes it easier for those crawlers.

What the phishers also may not have realized, or bothered to think through, is that a CAPTCHA uses a unique ID. If you start copying your CAPTCHA ID all over your phishing pages, you enable researchers to track your campaigns and help them quickly find and identify your new phishing sites, maybe even faster than the security crawlers would normally find them.
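A researcher might exploit that reuse roughly like this. The sketch clusters pages by their reCAPTCHA `data-sitekey` attribute; the domains, HTML snippets, and key values are all fabricated for illustration.

```python
# Sketch: clustering phishing pages by a reused CAPTCHA ID. The regex
# targets reCAPTCHA's "data-sitekey" attribute; all domains, snippets,
# and key values below are fabricated examples.
import re
from collections import defaultdict

PAGES = {
    "phish-a.example": '<div class="g-recaptcha" data-sitekey="6LcAAAA_shared"></div>',
    "phish-b.example": '<div class="g-recaptcha" data-sitekey="6LcAAAA_shared"></div>',
    "other.example":   '<div class="g-recaptcha" data-sitekey="6LcBBBB_unique"></div>',
}

def site_key(html):
    """Pull the reCAPTCHA site key out of a page's HTML, if any."""
    m = re.search(r'data-sitekey="([^"]+)"', html)
    return m.group(1) if m else None

# Group domains by the CAPTCHA key they embed.
clusters = defaultdict(list)
for domain, html in PAGES.items():
    clusters[site_key(html)].append(domain)

# Pages sharing a key likely belong to the same campaign:
print(dict(clusters))
```

Two pages sharing one key is a strong hint they were stamped out by the same kit, which is exactly the tracking signal the campaign operators handed researchers for free.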

Credential phishing

Credential phishing emails are usually a starting point for threat actors to gain a foothold in a network. Once the attacker gets hold of valid credentials, they can simply try the credentials they have found rather than resort to brute-force attacks. In this campaign, Microsoft noticed that the emails seemed to follow a general pattern that displayed all the email content in a box with a large button that led to credential harvesting pages when clicked.

Once the victim has passed the CAPTCHA verification, they are presented with a site that mimics the legitimate service the user was expecting. On this site they will see their email address already filled in, with the site asking for their password. This technique is designed to trick users into entering corporate credentials or other credentials associated with the email address.

If the user enters their password, the page refreshes and displays an error message stating that the page timed out or the password was incorrect and that they must enter their password again. This is likely done to get the user to enter their password twice, allowing attackers to ensure they obtain the correct password.

Once the user enters their password a second time, the page directs to a legitimate Sophos website that claims the email message has been released. This is another layer of social engineering to deceive the victim.

Recognizing the phish

Microsoft provides the reader with a lot of domains that are involved in this campaign, but for the recipient it is easier to recognize the format of the subject lines which might look like these:

  • [Recipient username] 1 New Notification
  • Report Status for [Recipient Domain Name] at [Date and Time]
  • Zoom Meeting for [Recipient Domain Name] at [Date and Time]
  • Status for [Recipient Domain Name] at [Date and Time]
  • Password Notification for [Recipient Domain Name] at [Date and Time]
  • [Recipient username] eNotification

These lead to sites (behind the CAPTCHA) that prompt the recipient to log in to Zoom, Office 365, or other Microsoft services. Many of the final domains hosting the phishing pages observed during this period follow a specific domain-generation algorithm (DGA) pattern:

  • [letter]-[letter][letter].xyz  (example: c-tl.xyz)
  • [letter]-[letter][letter].club (example: i-at.club)
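The pattern above is narrow enough to turn into a simple detection heuristic. A minimal sketch, assuming lowercase ASCII letters and only the two reported TLDs:

```python
# Sketch: matching the reported DGA pattern [letter]-[letter][letter]
# on .xyz or .club, as a simple blocklist/detection heuristic.
# Assumes lowercase ASCII letters and only the two TLDs named above.
import re

DGA_PATTERN = re.compile(r"^[a-z]-[a-z]{2}\.(?:xyz|club)$")

for domain in ["c-tl.xyz", "i-at.club", "example.com"]:
    print(domain, bool(DGA_PATTERN.match(domain)))
```

A heuristic like this will only catch this exact campaign shape, of course; DGA patterns are cheap for attackers to rotate, so it is a stopgap rather than durable protection.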

One thing to remember: a password manager can help protect you against phishing. A password manager will not provide credentials for a site that it does not recognize, and while a phishing site might fool the human eye, it won’t fool a password manager. This helps keep users’ passwords from being harvested.

Stay safe, everyone!

The post Microsoft warns about phishing campaign using open redirects appeared first on Malwarebytes Labs.