IT NEWS

BrakTooth Bluetooth vulnerabilities, crash all the devices!

Security researchers have revealed details about a set of 16 vulnerabilities that impact the Bluetooth software stack that ships with System-on-Chip (SoC) boards from several popular vendors. The same group of researchers disclosed the SweynTooth vulnerabilities in February 2020. They decided to dub this set of vulnerabilities BrakTooth.

BrakTooth affects major SoC providers such as Intel, Qualcomm, Texas Instruments, Infineon (Cypress), Silicon Labs and others. Vulnerable chips are used by Microsoft Surface laptops, Dell desktops, and several Qualcomm-based smartphone models.

The researchers say they only examined the Bluetooth software libraries of 13 SoC boards from 11 vendors. Looking further, however, they found that the same Bluetooth firmware is most likely used inside more than 1,400 chipsets, which serve as the base for a wide range of devices, such as laptops, smartphones, industrial equipment, and many types of smart “Internet of Things” devices.

It needs to be said that the impact is not the same for every type of device. Some devices can simply be crashed by specially crafted LMP packets, a denial of service that is cured with a restart. Others allow a remote attacker to run malicious code on the vulnerable device via Bluetooth Link Manager Protocol (LMP) packets—the protocol Bluetooth uses to set up and configure links to other devices.

Researchers believe the number of affected devices could be in the billions.

All the vulnerabilities

Full technical details and explanations for all 16 vulnerabilities can be found on the dedicated BrakTooth website where they are numbered V1 – V16 along with the associated CVEs. The researchers claim that all 11 vendors were notified about these security issues months ago (more than 90 days), well before they published their findings.

Espressif (pdf), Infineon, and Bluetrum have released patches. Despite having received the necessary information, the other vendors acknowledged the researchers’ findings but could not confirm a definite release date for a security patch, citing internal investigations into how each of the BrakTooth bugs impacts their software stacks and product portfolios. Texas Instruments said it would not be addressing the flaws impacting its chipsets.

CVE-2021-28139

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). The most serious vulnerability in BrakTooth has been listed under CVE-2021-28139, which allows attackers in radio range to trigger arbitrary code execution with a specially crafted payload.

While CVE-2021-28139 was tested and found to affect smart devices and industrial equipment built on Espressif Systems’ ESP32 SoC boards, the issue may impact many of the other 1,400 commercial products that are likely to have reused the same Bluetooth software stack.

Mitigation

The researchers emphasize the lack of basic tests in Bluetooth certification to validate the security of Bluetooth Low Energy (BLE) devices. The BrakTooth family of vulnerabilities revisits and reasserts this issue for the older, but still heavily used, Bluetooth Classic (BR/EDR) protocol implementations.

The advice to install patches, and to query your vendor about patches that are not (yet) available, will not come as a surprise. We would also advise users to disable Bluetooth on devices that do not need it; that way, attackers cannot send you malformed LMP packets at all. Since BrakTooth is based on the Bluetooth Classic protocol, an adversary would have to be within radio range of the target to execute the attacks, so in a safe environment Bluetooth can stay enabled.
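
How you do that depends on the device. As a minimal sketch, on Linux machines the rfkill utility can switch the Bluetooth radio off and back on when you need it again (on Windows, macOS, and mobile devices, use the system settings instead):

# Check the current state of the Bluetooth radio
rfkill list bluetooth

# Soft-block (disable) Bluetooth while you don't need it
rfkill block bluetooth

# Re-enable it later
rfkill unblock bluetooth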

Stay safe, everyone!

The post BrakTooth Bluetooth vulnerabilities, crash all the devices! appeared first on Malwarebytes Labs.

Macs turn on apps signed by Symantec, treat them as malware

On August 23, following an update to Apple’s XProtect system—one of the security features built into macOS—some Mac users began to see security alerts about some of their apps, claiming that they “will damage your computer,” and offering users the option to “report malware to Apple.” This has led to much confusion online, and to an influx of requests in our support system asking about this malware. The most common so far has been from an app named ReceiverHelper.

"ReceiverHelper" will damage your computer. Report malware to Apple to protect other users.
An Apple XProtect alert about ReceiverHelper

Is ReceiverHelper malware?

If you’re one of the affected folks, the good news is that this isn’t malicious at all. It is a component of Citrix, which is legitimate software made by the company of the same name. Not all Citrix software is being flagged as malicious, fortunately. Only some older versions of the software are causing problems.

Of course, if you thought that this was malware, we’d have to forgive you. Not only is macOS apparently saying that it is, but the name is highly suspicious. There has been a fair bit of Mac adware going around lately with odd two-word names, like StandardBoost or ActivityInput. All of these adware names are pretty generic, revealing nothing about what they’re actually supposed to be doing. Unfortunately, the name “ReceiverHelper” fits right in.

ReceiverHelper is not alone. There are a few other apps acting up. Among them are two other Citrix apps, ServiceRecords and AuthManager_Mac. (It’s almost like Citrix is trying to make its apps sound shady!) Other companies are also seeing an impact on older apps, such as Cisco AnyConnect’s vpnagentd.

What’s causing the warnings?

As was the case with a similar issue affecting HP printers last year, it’s all about code signing. What is code signing, you ask? In short, it’s a cryptographic way to validate that an app has not been tampered with. If an app is signed by the company that created it then you can be sure you’re using an unadulterated version of the software. Code signing is a really important security feature, and all apps really ought to be signed. If they’re not, they can’t be considered 100% safe. (For a primer in code signatures and certificates, see our previous coverage of the HP incident.)

In simple terms, code signing relies on a chain of trust: Signing is performed using a secret key. An organization proves its ownership of that secret key using a digital certificate, and that certificate’s authenticity is vouched for by a certificate authority (CA).

In the HP incident, HP revoked the certificate it used to sign a lot of its printer software. The HP software on people’s Macs didn’t change but the chain of trust that vouched for it was broken, so it began to trigger alerts as if it was malware.

This time around the chain of trust has been broken again, but the problem isn’t the certificates, it’s the CA that vouches for the certificates.

A CA is a trusted organization that issues certificates. In the case of Mac apps, you’re really supposed to get your certificates directly from Apple. However, not everyone does, and some companies will use certificates obtained from third parties to sign their apps.

Citrix did exactly this, and the decision has come back to haunt them. It turns out they made a really poor choice of CA to obtain their certificate from: Symantec.

What’s wrong with Symantec?

A few years ago, Symantec offered CA services. Trust is the most important asset a certificate authority has, but Symantec played a bit fast and loose with the rules and made some big mistakes. Those mistakes led to an investigation, and what was found was highly concerning.

As a result, it was widely agreed that trust for Symantec certificates should be gradually phased out. The slow process of distrusting Symantec certificates began in 2018.

On August 23, 2021, Apple pushed out an update for XProtect that, among other things, rejects any code signed with certificates issued by Symantec. The Gatekeeper process in macOS will reject any apps signed with such a certificate, showing the infamous “will damage your computer” message.

For those technically inclined and in possession of one of the affected apps, you can verify this yourself with the codesign and spctl commands in the Terminal:

% codesign --verify --verbose /usr/local/libexec/ReceiverHelper.app
/usr/local/libexec/ReceiverHelper.app: valid on disk
/usr/local/libexec/ReceiverHelper.app: satisfies its Designated Requirement

% spctl -a /usr/local/libexec/ReceiverHelper.app
/usr/local/libexec/ReceiverHelper.app: rejected

The codesign command shows that the code signature is still valid—meaning that the app hasn’t been tampered with and the certificate hasn’t been revoked. However, the spctl command, which checks the file with Gatekeeper, shows that it is rejected, and thus will not be allowed to run.
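
If you want to see why Gatekeeper objects, codesign can also print the certificate chain an app was signed with. The sketch below reuses the example path from above; for the affected apps you should find a Symantec (or VeriSign) entry among the Authority lines. (The 2>&1 is needed because codesign prints these details to stderr.)

% codesign -d --verbose=4 /usr/local/libexec/ReceiverHelper.app 2>&1 | grep '^Authority'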

How do I fix these issues?

The best fix is to simply remove or update the affected software. Unfortunately, we can’t help you with that. We’re good at removing malware here at Malwarebytes, but that’s not what this is. You’ll need to find out from the vendor of the affected software how to remove or replace it. For Citrix software, we recommend contacting Citrix support. (Unfortunately, we’ve gotten some reports that Citrix support is turning folks away if they don’t have active accounts, so you may need to be persistent.)

We do know that the affected Citrix apps (that we know about) are located at the following path:

/usr/local/libexec/

Why there? Excellent question… I have no idea. It’s not the right place for these things on macOS. Deleting ReceiverHelper, ServiceRecords, and AuthManager_Mac from this location may solve your problem. It also may cause other problems, as that wouldn’t be a complete uninstallation. You do this at your own risk and we suggest that you treat it as a method of last resort.

Avoid scams!

Unfortunately, if you type something like “remove ReceiverHelper” into Google right now, you’re going to get a bunch of scam sites in the results. These sites purport to help you remove the software, but in reality the instructions are automatically generated. The goal of these sites is to rank high in search results, label whatever the user was searching for as malware (ReceiverHelper, et al, in this case), and then promote some junk software to visitors who find they’re having trouble with the (nonsensical) instructions.

When you’re having a problem like this, Google and other search engines can be your worst enemy. Instead, consider asking on the Malwarebytes forums, Apple’s forums, or similar places, to get better advice.

The post Macs turn on apps signed by Symantec, treat them as malware appeared first on Malwarebytes Labs.

Google Play sign-ins can be abused to track another person’s movements

Even people that have been involved in cybersecurity for over 20 years make mistakes. I’m not sure whether that is a comforting thought for anyone or whether everyone should be worried now. But it is what it is and I make it a habit of owning my mistakes. So here goes.

With the aid of Google I was able to “spy” on my wife’s whereabouts without having to install anything on her phone.

In my defense, this whole episode happened on an operating system that I am far from an expert on (Android), and I was trying to be helpful. But what happened was unexpected.

What happened?

I installed an app on my wife’s Android phone and to do so, I needed to log into my Google account because I paid for the app. All went well, but after installing the app and testing whether it worked, I forgot to log out of Google Play. Silly, I know, but there you have it.

As it happens, at the time I installed the app on my wife’s phone I was investigating how much information the Google Maps Timeline feature was gathering about me. The timeline is an often-overlooked Google feature that “shows an estimate of places you may have been and routes you may have taken based on your Location History”. I was curious to see what Google records about me, even though I never actively check in or review places.

I started noticing strange things but couldn’t quite put my finger on what was going on. It showed me places I had been near, but never actually visited. I figured this was nothing more than Google being an over-achiever. But a few days ago I got my update and a place was listed that I had not even been near, but I knew my wife had been. Then, suddenly, it dawned on me: I was actually receiving location updates from my wife’s phone, as well as mine.

The only thing that might have alerted my wife to this unintentional surveillance—but never did—was my initial in a small circle at the top right corner of her phone, when she used the Google Play app. (You have to touch the icon to see the full details of the account that is logged in.)

After I logged out of Google Play on my wife’s phone the issue was still not resolved. After some digging I learned that my Google account was added to my wife’s phone’s accounts when I logged in on the Play Store, but was not removed when I logged out after noticing the tracking issue.

What needs to change?

I have submitted an issue report to Google, but I’m afraid they will tell me that it is a feature and not a bug.

There are a few things that Google could improve here:

The Google timeline was enabled on my phone, not on my wife’s, so I feel I should not have received the locations visited by her phone.

When I logged in under my account on her Google Play I got a “logged in from another device” warning. I feel there should have been something similar sent to her phone. Something along the lines of “someone else logged into Google Play on your phone.”

Google Play only shows the first letter of the Google account that is logged in.

The Google Play Store, showing only the first letter of the logged-in account

Like I said, my wife never noticed, and it’s easy to imagine how even this small giveaway could be overcome by a malicious user.

Of course, a cynic might say that the fundamental obstacle here is that if your business model demands that you hoover up as much information about somebody as possible, the opportunities for this kind of unintentional, tech-enabled abuse are likely to increase.

Coalition Against Stalkerware

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware (CAS), does everything in its power to keep people safe from being spied on. But malware scanners are limited to finding apps that spy on the user and send the information elsewhere. In this case even TinyCheck would not be helpful, as the information is not sent to a known, malicious server.

We should be very clear here, though. This situation is not a form of stalkerware, and it does not, by design, attempt to work around a user’s consent. This is more aptly a design and user experience flaw. However, it is still a flaw that can and should be called out, because the end result can still provide location tracking of another person’s device.

Eva Galperin, director of cybersecurity for the Electronic Frontier Foundation, which is also a founding partner of the Coalition Against Stalkerware, told Malwarebytes Labs that this flaw actually showcases why it is so important for technology developers to take situations of domestic abuse into account when designing their products.

The flaw “does highlight the importance of quality assurance and user testing that takes domestic abuse situations into account and takes the leakage of location data seriously,” Galperin said. “One of the most dangerous times in a domestic abuse situation is the time when the survivor is trying to disentangle their digital life from their abusers’. That is a time when the survivors’ data is particularly vulnerable to this kind of misconfiguration problem and the potential consequences are very serious.”

Tech-enabled abuse

You may be thinking that with physical access to my wife’s phone I could have done a lot worse than this, including installing a spyware app. But this kind of abusive misuse of legitimate technology is common enough that it has a name: Tech-enabled abuse.

And, as one of my co-workers pointed out, people are often lazy when they deal with computers and they will often settle for the first thing they find that works. And this really is a low effort method of spying on someone’s whereabouts. Plus you do not need to install anything and there is only a minimal chance of being found out.

How to stop it

For now, the only thing you can do is check which accounts have been added to your phone. While this post talks about Google Maps location information, I’m pretty sure there will be other apps that are linked to your account rather than to your phone. If someone else is logged into Google Play on your phone, those apps could be queried for information by that person rather than by you.

The instructions below may differ slightly between Android versions, but they will give you an idea of where to look for added accounts.

Under Settings > Accounts and Backups > Manage Accounts I found my Google account listed. Tap the account you want to remove and you will see the option to do so. After removing my account from my wife’s phone there, the tracking issue was finally resolved.
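
For the technically inclined, you can also list every account registered on an Android phone from a computer using adb and look for accounts that shouldn’t be there. This is a minimal sketch and assumes the Android platform tools are installed and USB debugging is enabled on the phone:

# List the accounts registered on the connected device, keeping only Google accounts
adb shell dumpsys account | grep com.google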

The post Google Play sign-ins can be abused to track another person’s movements appeared first on Malwarebytes Labs.

FTC bans SpyFone and its CEO from continuing to sell stalkerware

Nearly two years after the US Federal Trade Commission first took aim against mobile apps that can non-consensually track people’s locations and pry into their emails, photos, and videos, the government agency placed restrictions Wednesday on the developers of SpyFone—which the FTC called a “stalkerware app company”—preventing the company and its CEO Scott Zuckerman from ever again “offering, promoting, selling, or advertising any surveillance app, service, or business.”

Wednesday’s enforcement action represents a much firmer stance from the FTC compared to the settlement it reached in 2019, when the government agency refrained from even using the term “stalkerware” and focused more on the lack of cybersecurity protections within the apps it investigated than on the privacy invasions they enabled.

FTC Commissioner Rohit Chopra, who made a separate statement on Wednesday, said much the same.

“This is a significant change from the agency’s past approach,” Chopra said. “For example, in a 2019 stalkerware settlement, the Commission allowed the violators to continue developing and marketing monitoring products.”

That settlement prevented the company Retina-X Studios LLC and its owner, James N. Johns Jr., from selling their three Android apps unless significant security overhauls were made. At the time, critics of the settlement argued that the FTC was not preventing Retina-X from selling stalkerware-type apps, but merely preventing it from selling insecure stalkerware-type apps.

This time, the FTC spoke more forcefully about the threat that these apps present to overall privacy and their undeniable intersection with domestic violence, saying in a release that the “apps sold real-time access to their secret surveillance, allowing stalkers and domestic abusers to stealthily track the potential targets of their violence.”

In that same release Wednesday, Samuel Levine, Acting Director of the FTC’s Bureau of Consumer Protection said:

“SpyFone is a brazen brand name for a surveillance business that helped stalkers steal private information. The stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security. This case is an important reminder that surveillance-based businesses pose a significant threat to our safety and security. We will be aggressive about seeking surveillance bans when companies and their executives egregiously invade our privacy.”

The FTC’s enforcement against SpyFone will require the business—which is registered as Support King LLC—to also destroy any information that was “illegally collected” through its Android apps. It must also notify individuals whose devices were manipulated to run SpyFone apps, warning them that their devices both could have been monitored and may no longer be secure.

According to a complaint filed by the FTC which detailed its investigation into Support King, SpyFone, and Zuckerman, the company sold three versions of its SpyFone app (“Basic,” “Premium,” and “Xtreme”) at various prices. The company also sold “SpyFone for Android Xpress,” which the FTC described not as an app, but as an actual mobile device that came pre-installed with a one-year subscription for Android Xtreme. The price of the device started at $495.

The FTC also focused on the install methods for SpyFone’s apps, revealing that SpyFone required its users to subvert built-in cybersecurity protections on other mobile devices so as to avoid detection by those devices’ operating systems. Certain functions advertised by SpyFone also required extra manipulations by users, the FTC said.

“To enable certain functions of the SpyFone products, such as viewing outgoing email, purchasers must gain administrative privileges to the mobile device, such as through ‘rooting’ the mobile device, giving the purchaser privileges to install other software on the mobile device that the manufacturer would not otherwise allow,” the FTC said. “This access enables features of the SpyFone products to function, exposes a mobile device to various security vulnerabilities, and can invalidate warranties that a mobile device manufacturer or carrier provides.”

The FTC also found that SpyFone apps could hide themselves from the device’s user—a telltale trait of apps that have been used to non-consensually track another user’s location and dig through their private messages and information.

The enforcement action also shows that the FTC is not strictly investigating the most popular or the most detected stalkerware-type apps on the market.

For example, Malwarebytes for Android detects the products made by SpyFone. From the start of 2021 until yesterday, August 31, 2021, Malwarebytes detected these products a total of 334 times. The average detection count for the past six months is about 42 detections per month. These are comparatively low numbers: our most-detected stalkerware-type apps have accrued roughly 4,000 detections since the start of 2021.

Malwarebytes also welcomes the news of the FTC’s enforcement and is excited for the agency’s new direction on this well-documented, pernicious threat to privacy.

The post FTC bans SpyFone and its CEO from continuing to sell stalkerware appeared first on Malwarebytes Labs.

ProxyToken: Another nail-biter from Microsoft Exchange

Had I known this season of Microsoft Exchange was going to be so long, I’d have binge-watched it. Does anyone know how many episodes there are?

Sarcasm aside, while ProxyToken may seem like yet another episode of 2021’s longest running show, that doesn’t make it any less serious, or any less eye-catching. The plot is a real nail-biter (and there’s a shocking twist at the end).

This week’s installment is called ProxyToken. It’s a vulnerability that allows an unauthenticated attacker to perform configuration actions on mailboxes belonging to arbitrary users. For example, an attacker could use the vulnerability to forward your mail to their account, and read all of your email. And not just your account. The mail for all your co-workers too. So there are multiple possible themes for this episode, including plain old data theft, industrial espionage, or just espionage.

Background and character development

Before we can explain this week’s plot, it’s important to catch up on some background information, and meet some of the principal players.

Exchange Server 2016 and Exchange Server 2019 automatically configure multiple Internet Information Services (IIS) virtual directories during installation. The installation also creates two sites in IIS. One is the default website, listening on ports 80 for HTTP and 443 for HTTPS. This is the site that all clients connect to for web access.

This front end website for Microsoft Exchange in IIS is mostly just a proxy to the back end. The Exchange back end listens on ports 81 for HTTP and 444 for HTTPS. For all post-authentication requests, the front end’s job is to repackage the requests and proxy them to corresponding endpoints on the Exchange Back End site. It then collects the responses from the back end and forwards them to the client.

Which would all be fine, if it weren’t for a feature called “Delegated Authentication” that Exchange supports for cross-forest topologies. An Active Directory forest (AD forest) is the topmost logical container in an Active Directory configuration, holding domains, users, computers, and group policies. A single Active Directory configuration can contain more than one domain, and we call the tier above the domain the AD forest. A forest can contain several domain trees, and it can be tough to see the forest for the trees.

Forest trusts reduce the number of external trusts that need to be created. Forest trusts are created between the root domains of two forests. In such deployments, the Exchange Server front end is not able to perform authentication decisions on its own. Instead, the front end passes requests directly to the back end, relying on the back end to determine whether the request is properly authenticated. These requests that are to be authenticated using back-end logic are identified by the presence of a SecurityToken cookie.

The plot

For requests where the front end finds a non-empty cookie named SecurityToken, it delegates authentication to the back end. But the back end can be completely unaware that it needs to authenticate these incoming requests based on the SecurityToken cookie, because the DelegatedAuthModule that checks for this cookie is not loaded in installations that have not been configured to use the special delegated authentication feature. The astonishing end result is that specially crafted requests can go through without being subjected to authentication on either the front end or the back end.

The twist

There is one additional hurdle an attacker needs to clear before they can successfully issue an unauthenticated request, but it turns out to be a minor nuisance. Each request to an Exchange Control Panel (ECP) page is required to have a ticket known as the “ECP canary”. Without a canary, the request will result in an HTTP 500 response.

However, imagine the attacker’s luck: the 500 error response is accompanied by a valid canary, which the attacker can reuse in their next, specially crafted, request.

The cliffhanger

This particular exploit assumes that the attacker has an account on the same Exchange server as the victim. It installs a forwarding rule that allows the attacker to read all the victim’s incoming mail. On some Exchange installations, an administrator may have set a global configuration value that permits forwarding rules having arbitrary Internet destinations, and in that case, the attacker does not need any Exchange credentials at all. Furthermore, since the entire ECP site is potentially affected, various other means of exploitation may be available as well.

Credits

The ProxyToken vulnerability was reported to the Zero Day Initiative in March 2021 by researcher Le Xuan Tuyen of VNPT ISC. The vulnerability is listed under CVE-2021-33766 as a Microsoft Exchange Information Disclosure Vulnerability and it was patched by Microsoft in the July 2021 Exchange cumulative updates.

Other “must watch” episodes

Microsoft Exchange has been riveting viewing this year, and with four months of the year to go it seems unlikely that ProxyToken is going to be the season finale. So here’s a list of this season’s “must watch” episodes (so far). If you’ve missed any, we suggest you catch up as soon as possible.

And remember, Exchange is attracting a lot of interest this year. Everyone’s a fan. All of these vulnerabilities are being actively scanned for and exploited by malware peddlers, including ransomware gangs.

The post ProxyToken: Another nail-biter from Microsoft Exchange appeared first on Malwarebytes Labs.

A week in security (August 23 – August 29)

Last week on Malwarebytes Labs:

Other cybersecurity news:

  • A vulnerability in Microsoft Azure left thousands of customer databases exposed. (Source: Reuters)
  • Researchers from vpnMentor discovered an insecure database belonging to EskyFun, a Chinese Android game developer, exposing millions of gamers to hacking. (Source: vpnMentor)
  • The UK will begin making changes to privacy laws as they depart from GDPR as part of post-Brexit proceedings. (Source: The Wall Street Journal)
  • China is reportedly hiring hackers to become spies and entrepreneurs at the same time. (Source: The New York Times)
  • Phishers used an XSS vulnerability in UPS’s official site to spread malware. (Source: BleepingComputer)
  • JP Morgan Chase bank customers were notified that their data was inadvertently exposed to other users. (Source: SecurityWeek)
  • ALTDOS is hacking companies in Southeast Asia to steal data and either ransom it back to them or sell for profit. (Source: The Record by Recorded Future)
  • Flaws in infusion pumps could let hackers increase medication dosage. (Source: WIRED)
  • Researchers for Zscaler revealed the prevalence of fake streaming sites and adware during the 2020 Tokyo Olympics. (Source: Zscaler Blog)
  • Bumble, a popular dating app, was leaking users’ exact locations until recently patched. (Source: IT News)

Stay safe, everyone!

The post A week in security (August 23 – August 29) appeared first on Malwarebytes Labs.

Hackers, tractors, and a few delayed actors. How hacker Sick Codes learned too much about John Deere: Lock and Code S02E16

No one ever wants a group of hackers to say about their company: “We had the keys to the kingdom.”

But that’s exactly what the hacker Sick Codes said on this week’s episode of Lock and Code, in speaking with host David Ruiz, when talking about his and fellow hackers’ efforts to peer into John Deere’s data operations center, where the company receives a near-endless stream of data from its Internet-connected tractors, combines, and other smart farming equipment.

For Sick Codes, what began as the discovery of a small flaw grew into a much larger group project that uncovered reams of sensitive information. Customer names, addresses, equipment type, equipment location, and equipment reservations were all uncovered by Sick Codes and his team, he said.

“A group of less than 10 people were able to pretty much get root on John Deere’s Operations Center, which connects to every other third party connectivity service that they have. You know, you can get every farms’ data, every farms’ water, I’m talking everything. We had like the keys to the kingdom. And that was just a few people in two days.”

Sick Codes

During their investigation, Sick Codes also tried to report these vulnerabilities to the companies themselves. But his and his team’s efforts were sometimes rebuffed. For one vulnerability, Sick Codes said, he was even pushed into staying quiet.

Listen to Sick Codes talk about his cyber investigation into agricultural companies, and his response to being led into a private disclosure program which he wanted nothing to do with, on this week’s episode of Lock and Code.


You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Further, you can watch Sick Codes’ presentation at DEF CON on YouTube, and you can read a summary of the talk. The hackers who helped discover the vulnerabilities, which you can read about here, included:

The post Hackers, tractors, and a few delayed actors. How hacker Sick Codes learned too much about John Deere: Lock and Code S02E16 appeared first on Malwarebytes Labs.

Microsoft warns about phishing campaign using open redirects

The Microsoft 365 Defender Threat Intelligence Team posted an article stating that they have been tracking a widespread credential phishing campaign using open redirector links. Open redirects have been part of the phisher’s arsenal for a long time and it is a proven method to trick victims into clicking a malicious link.

What are open redirects?

The Mitre definition for “open redirect” specifies:

“An http parameter may contain a URL value and could cause the web application to redirect the request to the specified URL. By modifying the URL value to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials. Because the server name in the modified link is identical to the original site, phishing attempts have a more trustworthy appearance.”

In layman’s terms, you click a link thinking you are going to a trustworthy site, but the link is constructed in a way that redirects you to another site, which in these cases is a lot less trustworthy. For instance, users who have been trained to hover over links in emails before clicking them may see a domain they trust and click the link, only to be redirected and land somewhere unexpected. And if the phisher is any good, it will look as if the victim landed exactly where they expected to.
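
To make that concrete, here is an illustration of what such a link can look like and how the redirect shows up on the command line. The domain and parameter name are invented for the example; real campaigns use whatever redirector the abused site happens to expose:

# What the victim sees when hovering is the trusted domain; the real
# destination is tucked away in a parameter:
#   https://trusted.example.com/redirect?url=https://phishing.example.net/login

# Fetching the link (without following redirects) reveals the Location
# header that sends the browser on to the second site:
curl -s -o /dev/null -D - "https://trusted.example.com/redirect?url=https://phishing.example.net/login" | grep -i '^location:'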

CAPTCHA

Another element this phishing campaign uses to gain the trust of the victim is adding CAPTCHA verification to the phishing page. This is not uncommon. Researchers have found several new campaigns using legitimate challenge-and-response services (such as Google’s reCAPTCHA) or deploying customized, fake CAPTCHA-like validation. Earlier research already showed an increase in CAPTCHA-protected phishing pages. Hiding phishing content behind CAPTCHAs prevents crawlers from detecting malicious content and even adds a legitimate look to phishing login pages.

After all, CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. So it will try to keep out the automated crawlers run by security vendors and researchers, and only let in the “puny humans” that are ripe to be phished. I wrote “try” in that last sentence on purpose, because there are several crawlers out there equipped with CAPTCHA-solving abilities that outperform mine. And repeating the same CAPTCHA on several sites only makes it easier for those crawlers.

What the phishers also may not have realized, or bothered to think through, is that each CAPTCHA deployment uses a unique ID. If you copy the same CAPTCHA ID across all of your phishing pages, you enable researchers to track your campaigns and to quickly find and identify your new phishing sites, perhaps even faster than the security crawlers would otherwise find them.
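
As an illustration of how that tracking can work: assuming a phishing page embeds a standard Google reCAPTCHA v2 widget, the reused site key sits in plain sight in the page source, so a researcher with a saved copy of a page (the file name here is made up) can pull it out and pivot on it:

# Extract the reCAPTCHA site key from a saved copy of a phishing page
grep -o 'data-sitekey="[^"]*"' phishing_page.html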

Credential phishing

Credential phishing emails are usually a starting point for threat actors to gain a foothold in a network. Once an attacker gets hold of valid credentials, they can simply use them rather than resort to brute-force attacks. In this campaign, Microsoft noticed that the emails seemed to follow a general pattern: all the email content sat in a box with a large button that led to credential harvesting pages when clicked.

Once the victim has passed the CAPTCHA verification, they are presented with a site that mimics the legitimate service they were expecting. On this site they will see their email address already filled in, together with a prompt for their password. This technique is designed to trick users into entering corporate credentials or other credentials associated with the email address.

If the user enters their password, the page refreshes and displays an error message stating that the page timed out or the password was incorrect and that they must enter their password again. This is likely done to get the user to enter their password twice, allowing attackers to ensure they obtain the correct password.

Once the user enters their password a second time, the page directs to a legitimate Sophos website that claims the email message has been released. This is another layer of social engineering to deceive the victim.

Recognizing the phish

Microsoft provides the reader with a long list of domains involved in this campaign, but for recipients it is easier to recognize the format of the subject lines, which might look like these:

  • [Recipient username] 1 New Notification
  • Report Status for [Recipient Domain Name] at [Date and Time]
  • Zoom Meeting for [Recipient Domain Name] at [Date and Time]
  • Status for [Recipient Domain Name] at [Date and Time]
  • Password Notification for [Recipient Domain Name] at [Date and Time]
  • [Recipient username] eNotification

These lead to sites (behind the CAPTCHA) that prompt the recipient to log in to Zoom, Office 365, or other Microsoft services. Many of the final domains hosting the phishing pages observed during this campaign follow a specific domain-generation algorithm (DGA) pattern (a quick way to hunt for them is shown after the list):

  • [letter]-[letter][letter].xyz  (example: c-tl.xyz)
  • [letter]-[letter][letter].club (example: i-at.club)
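
If you have proxy or DNS logs to hand, a simple regular expression over the domains you have seen is one way to hunt for this pattern. This is a sketch based only on the two patterns above, and assumes a hypothetical domains.txt file with one domain per line:

# Flag domains matching the reported [letter]-[letter][letter] pattern
# on the .xyz and .club TLDs, e.g. c-tl.xyz or i-at.club
grep -E '^[a-z]-[a-z]{2}\.(xyz|club)$' domains.txt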

One thing to remember: a password manager can help protect you against phishing. A password manager will not provide credentials for a site that it does not recognize, and while a phishing site might fool the human eye, it won’t fool a password manager. This helps prevent users from getting their passwords harvested.

Stay safe, everyone!

The post Microsoft warns about phishing campaign using open redirects appeared first on Malwarebytes Labs.

How to stay secure from ransomware attacks this Labor Day weekend

Labor Day weekend is just around the corner and, believe it or not, cybercriminals are likely just as excited as you are! 

Ransomware gangs have nurtured a nasty habit of starting their attacks at the least convenient times: when computers are idle, when employees who might notice a problem are out of the office, and when the IT or security staff who might deal with it are shorthanded. 

They like to attack at night and at weekends, and they love a holiday weekend. 

Indeed, while many people are looking forward to catching up with friends and family this Labor Day weekend, cybercrime gangs are likely huddling, too, planning to attack somebody. 

On the last big holiday weekend, Independence Day, attackers using REvil ransomware celebrated with an enormous supply-chain attack on Kaseya, one of the biggest IT solutions providers in the US for managed service providers (MSPs). Threat actors used a Kaseya VSA auto-update to push ransomware into more than 1,000 businesses. 

Why out-of-office attacks work

Ransomware works by encrypting huge numbers of files on as many of an organization’s computers as possible. Performing this kind of strong encryption is resource intensive and can take a long time, so even if an organization doesn’t spot the malware used in an attack, its tools might notice that something is amiss. 

“You never think you’re gonna be hit by ransomware,” says Ski Kacoroski, a system administrator with the Northshore School District in Washington state. Speaking on Malwarebytes’ Lock & Code podcast, he told us about Northshore’s nighttime attack: “It was an early Saturday morning. I got a text from my manager saying ‘something is up’ … after a short while I realized that [a] server had been hit by ransomware. It took us several more hours before we realized exactly how much had been hit.” He added “We had some high CPU utilizations alert the night before when they started their attack, but most of us were already asleep by midnight.” 

Criminals taking advantage while employees are away for holidays, weekends, or simply because their shift is over, is a classic “when the cat’s away” opportunistic crime. 

Be prepared for holiday disruption

We reached out to Adam Kujawa, Malwarebytes’ resident cybersecurity evangelist, and asked what organizations can do to minimize the chance their holiday weekend will be disrupted.  

Do these before the holiday 

  • Run a deep scan on all endpoints, servers, and interconnected systems to ensure there are no threats lurking on those systems, waiting to attack! 
  • Once you know those systems are clean, force a password change a week or two out from the holiday, so any guessed or stolen credentials are rendered useless. 
  • Employ stricter access requirements for sensitive data, such as multi-factor authentication (MFA), manager authorization, and requiring a local network connection. Although this will make things more difficult for employees (for a short amount of time), it will also make it significantly more difficult for attackers to traverse networks and gain access to unauthorized data. Once the holiday ends, you can revert these policies since you’ll have more eyes to watch out for threats. 
  • Provide guidance to employees on not posting about vacations and/or holiday plans on social media. 
  • Provide free—or free for a limited time—security software to employees to use on personal systems. 
  • Ensure all remotely accessible connections (e.g. VPNs, RDP connections) are secured with MFA. 

Do these during the holiday 

  • Ensure all non-essential systems and endpoints are shut down at the end of the day. 
  • Reduce risk by disabling or shutting down systems and/or processes which might be exploitable, if they aren’t needed. 
  • Ensure there is always someone watching the network during the holiday, and make sure they are equipped to handle a sudden attack situation. We suggest creating a cyberattack reaction and recovery plan that includes call sheets, procedures for communicating with law enforcement and collecting evidence, and a list of which systems can be isolated or shut down without seriously affecting the operations of the organization.

“The only mistake in life is a lesson not learned”

When we asked him why he came forward to tell his ransomware story when many others are reluctant to, Kacoroski told us: “The only mistake in life is a lesson not learned.” 

A lesson we can all learn from recent history is that cybercriminals are probably planning to ruin somebody’s Labor Day weekend. So don’t wait for an attack to happen to your organization before you decide you need to be ready. 

Prepare now, so you can enjoy an uninterrupted Labor Day weekend! 

The post How to stay secure from ransomware attacks this Labor Day weekend appeared first on Malwarebytes Labs.

US government and private sector agree to invest time, money in cybersecurity

In the wake of several high-profile ransomware attacks against critical infrastructure and major organizations in the last few months, President Biden met with private sector and education leaders to discuss a whole-of-nation effort needed to address cybersecurity threats and bolster the nation’s cybersecurity.

Several participants in President Biden’s meetings have recently announced commitments and initiatives:

  • The National Institute of Standards and Technology (NIST) will collaborate with industry and other partners to develop a new framework to improve the security and integrity of the technology supply chain.
  • The Biden Administration announced the formal expansion of the Industrial Control Systems Cybersecurity Initiative to a second major sector: natural gas pipelines.
  • Apple announced it will establish a new program to drive continuous security improvements throughout the technology supply chain.
  • Google announced it will invest $10 billion over the next five years to expand zero-trust programs, help secure the software supply chain, and enhance open-source security.
  • IBM announced it will train 150,000 people in cybersecurity skills over the next three years, and will partner with more than 20 Historically Black Colleges & Universities to establish Cybersecurity Leadership Centers to grow a more diverse cyber workforce.
  • Microsoft announced it will invest $20 billion over the next five years to accelerate efforts to integrate cybersecurity by design and deliver advanced security solutions. Microsoft also announced it will immediately make available $150 million in technical services to help federal, state, and local governments with upgrading security protection, and will expand partnerships with community colleges and non-profits for cybersecurity training.
  • Amazon announced it will make available to the public at no charge the security awareness training it offers its employees.

And those are just the big players. The full list can be found here.

The importance and relevance of each of these is discussed below.

Supply Chain

Supply chain attacks have been an important attack vector for ransomware and led to some of the biggest and most costly incidents. While not new, these attacks are always interesting because they usually involve highly skilled attackers and claim a lot of victims. A prime example of such a case is the attack on Kaseya, an IT solutions provider for managed service providers (MSPs).


You can listen to what went wrong, exactly, in Kaseya on our podcast Lock and Code, with guest Victor Gevers of the Dutch Institute for Vulnerability Disclosure, which found seven or eight zero-days in the product.



The Industrial Control Systems Cybersecurity Initiative

In April 2021, the Biden Administration launched an Industrial Control Systems Cybersecurity Initiative to strengthen the cybersecurity of critical infrastructure across the United States. The Electricity Subsector Action Plan was the first in a series of sector-by-sector efforts to safeguard the Nation’s critical infrastructure from cyber threats. Expanding to gas pipelines may have been prompted by the attack on Colonial Pipeline.

Security training

Organizations know that training employees on cybersecurity and privacy is not only expensive but time-consuming. Putting together a cybersecurity and privacy training program that is not only effective but sticks requires an incredible amount of time, effort, and thought: finding out employees’ learning needs, planning, creating goals, and identifying where they want to go.

For organizations to offer that kind of training for free to people outside of their own organization is a big commitment, but it is also hard to make that training effective. The more you know about the environment a student will be working in, the more targeted and effective the training can be.

This type of training can be broken down into a few layers:

  • Awareness, which is not really training, but making people aware of what dangers are out there. A regular reader of our blog will have a high awareness level, or so we hope.
  • Actual training strives to produce relevant and needed security skills and competencies. But as we pointed out, that is hard to do without specific knowledge of the working environment. Knowing which programs the trainees will be using is essential for targeted and effective training.
  • Education integrates all of the security skills and competencies of the various functional specialties into a common body of knowledge and strives to produce IT security specialists and professionals capable of vision and proactive response. That would be a good thing, given the shortage of cybersecurity professionals, but it is not what I’m reading in the announcements.

We are glad to see these initiatives, and the amount of money and effort being put into them. Some will certainly be more effective than others, and we will do our best to keep awareness levels high.

The post US government and private sector agree to invest time, money in cybersecurity appeared first on Malwarebytes Labs.