IT NEWS

Cardiologist moonlighted as successful ransomware developer

The US has charged a 55-year-old French-Venezuelan cardiologist living in Venezuela with “attempted computer intrusions and conspiracy to commit computer intrusions”. This was revealed in an unsealed complaint in a federal court in Brooklyn, New York.

Moises Luis Zagala Gonzales worked as a ransomware developer on the side, renting out and selling ransomware tools to cybercriminals. He is known by many names—all related to his line of work—in the criminal underground: “Nosophoros” (Greek for “disease-bearer” or “diseased”), “Aesculapius” (the Latinized name of Asclepius, the Greek god of medicine), and “Nebuchadnezzar” (the famed Babylonian king credited with conducting the first recorded clinical trial in history).

US Attorney Breon Peace, who announced the charges, said:

“As alleged, the multi-tasking doctor treated patients, created and named his cyber tool after death, profited from a global ransomware ecosystem in which he sold the tools for conducting ransomware attacks, trained the attackers about how to extort victims, and then boasted about successful attacks, including by malicious actors associated with the government of Iran. Combating ransomware is a top priority of the Department of Justice and of this Office. If you profit from ransomware, we will find you and disrupt your malicious operations.”

Jigsaw v2 and Thanos are Zagala’s creations

Jigsaw made its first appearance in 2016. Initially called “BitcoinBlackmailer”, Jigsaw became a memorable ransomware strain because its ransom note depicted Billy the Puppet, the macabre figure from the popular thriller franchise Saw.

The Jigsaw ransomware ransom note (Source: Marcelo Rivero | Malwarebytes)

Saw-inspired, Jigsaw puts pressure on victims to do what they’re told: Pay up now, or more of your files will be deleted every hour you delay. On top of this, it also has (in Zagala’s description) a “Doomsday” counter that counts the times a user attempts to terminate the ransomware.

“If the user kills the ransomware too many times, then it’s clear he won’t pay, so better erase the whole hard drive,” Zagala wrote about the tool.

The Thanos ransomware, Zagala’s second ransomware tool, was advertised as a “Private Ransomware Builder” in 2019. Presumably, he named it after the malevolent comic-book villain of the same name, who is in turn based on “Thanatos”, the personification of death in Greek mythology.

The Thanos ransomware ransom note (Source: Marcelo Rivero | Malwarebytes)

Thanos allowed criminals to create their own unique ransomware strain, which they could then rent out to other criminals. Interested criminals could purchase a license for Thanos or join Zagala’s affiliate program, where he received a cut of the ransom payout.

The complaint alleged Zagala bragged that Thanos was “nearly undetected” by antivirus software. After encrypting all files, Thanos also deletes itself, making detection and recovery “almost impossible” for the victim.

MuddyWater, an Iranian APT, used Thanos ransomware to attack Israeli entities in September 2020. In June 2020, Hakbit, a Thanos offshoot, was used in attacks against pharmaceutical and healthcare sectors (among others) in Austria, Switzerland, and Germany.

“Malware analysts are all over me”

According to the FBI, Zagala began appearing online as “Nebuchadnezzar” because “malware analysts are all over me”.

Around May 3, 2022, law enforcement agencies conducted an interview with a relative of Zagala, who resides in Florida. Zagala used the PayPal account of this relative to receive his illicit ransomware earnings.

The relative provided details that helped strengthen the link between Zagala and his ransomware activities as a creator and underground businessman. They revealed that Zagala taught himself computer programming, and the contact details they had for him also matched the registered email address associated with the Thanos infrastructure.

Zagala is facing up to ten years’ imprisonment if convicted.


How iPhones can run malware even when they’re off

Most people think that turning off their iPhone – or letting the battery die – means that the phone is, well, off. The thing is, this isn’t quite true. In reality, most of the phone’s functionality has ended, but there are components that mindlessly continue a zombie-like existence, for the most part unbeknownst to the user.

Even when the battery dies in your iPhone, it’s not truly dead. The phone shuts itself down to conserve the last little bits of power and enters a low power mode that is very different from the user-facing Low Power Mode offered in the battery settings when the charge drops to 20%. These last trickles of power are used to keep certain limited functionality active for some time. The same is true of turning the phone off, except that this functionality can stay active much longer with a battery closer to full.

What is this functionality? Most notably, Express Cards – payment cards used with public transit systems – can continue to work in such a state. So can things like digital home or car keys, which seems logical. After all, you don’t want to get locked out just because your iPhone battery died!

More surprising is that the iPhone’s Find My capabilities continue to function. This means that the phone’s location can still be tracked, in a manner similar to how AirTags work, even after it has been turned off.

Is this a problem or not a problem?

Much ado has been made in the past of the use of things like Express Cards, which can be used without authentication. Someone could potentially jostle you in a public place and scan your phone with a fake public transit payment terminal, thus skimming money off the card you have set as an Express Card. That’s 100% possible, but not really all that likely.

Not to mention that there’s a simpler scenario. Someone could pull the same trick with a normal payment terminal, rather than one pretending to be a public transit terminal, and the tap-to-pay cards in your wallet. That’s a much simpler scenario with a much higher probability of success.

Similarly, digital keys could be used to access your car or your home, if someone stole your phone. Of course, that’s assuming they could figure out where your car or your home are from a locked phone, which is a pretty big “if” unless the thief had some prior knowledge.

In this regard, your phone doesn’t really pose much more of a risk than other things you’d have on your person. Of course, this is highly dependent on circumstances. For example, stealing a phone left on a table while the owner’s not paying attention would be a lot easier than stealing a wallet and keys from someone’s pocket. On the other hand, if a thief snatches someone’s purse or backpack, they may get phone, keys, and wallet, and the phone could easily be the least useful of the three.

Find My, on the other hand, is a bigger problem.

What’s the problem with Find My?

The major use cases for Find My are for you to find a lost device, or for someone you’ve shared your location with to find you. So what’s the problem? I mean, these are situations where you fully intend for your phone to be trackable, right? Unfortunately, there are scenarios that are not so beneficial.

Consider stalking or abuse scenarios where the stalker knows your Apple ID credentials, or has been given – through stealth or bullying – the ability to see your location. This is often the case with intimate partner abuse, for example. If you are in such an abusive situation, you may be under the false impression that turning your phone off will temporarily stop the tracking. Alas, that is not the case, and this could be a painful lesson to learn… both literally and figuratively.

However, there’s a possibility of still worse problems, like malware.

Wait… what?! Did you say malware?

Indeed. German researchers recently found that the Bluetooth firmware, which manages the Bluetooth Low Energy (BLE) communication that Find My relies on, is not cryptographically signed. Because it isn’t signed, modifications to the firmware cannot be detected without comparing it against a known-good copy.
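
As a rough illustration of what “comparing against a known-good copy” involves: with no signature to verify, the only way to spot tampering is to hash a dump of the firmware and compare it with a hash taken from a known-good image. Here is a minimal sketch, assuming you have both a firmware dump and a trusted reference digest (the file name and digest below are placeholders):

    import hashlib

    # Hypothetical inputs: a firmware dump pulled from the device, and a reference
    # SHA-256 recorded from a known-good copy of the same firmware version.
    FIRMWARE_DUMP = "bluetooth_firmware.bin"
    KNOWN_GOOD_SHA256 = "replace-with-known-good-digest"

    def sha256_of(path):
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of(FIRMWARE_DUMP) != KNOWN_GOOD_SHA256:
        print("Firmware differs from the known-good image: possible tampering.")
    else:
        print("Firmware matches the known-good image.")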

Since BLE communication continues when the phone is off, the researchers found that there is a theoretical possibility that malware on the device could modify the Bluetooth firmware, thus installing malicious code that could continue to run even when the phone appears to be off. The most likely use case for such malware would be to use the BLE tracking capabilities to monitor the phone’s location.

Now, before you go chucking your phone in the garbage or smashing it with a hammer, let’s keep in mind that this is all theoretical at the moment. Compromising the firmware would require a jailbreak, which is not an easy thing to accomplish remotely. Physical access lowers the difficulty level, but it’s still not likely that this technique could be used by most adversaries.

How can I protect myself?

If you’re in a situation where an abuser is monitoring your location, you should be aware that turning off your phone will not stop the tracking. For those in such situations, we advise seeking help, as disabling the tracking could have bad consequences. If you need to not be tracked for a while, leave your phone in a location where it’s reasonable to expect you might spend some time.

When it comes to malware, there’s not much to worry about at present. There’s no known malware using BLE firmware compromise to remain persistent when the phone is “off.” Further, unless you are likely to be targeted by a nation-state adversary – for example, if you are a human rights advocate or journalist critical of an oppressive regime – you’re not likely to ever run into this kind of problem. (If that ever changes, you can be sure we’ll cover that here!)

If you actually are a potential target for a nation-state adversary, don’t trust that your phone is ever truly off. In such a case, a Faraday bag, or a low-tech flip phone, might be a good investment!


Sysrv botnet is out to mine Monero on your Windows and Linux servers

In a Twitter thread, the Microsoft Security Intelligence team has revealed new information about the latest versions of the Sysrv botnet.

The variant they focused on uses a range of known exploits for vulnerabilities in web apps and databases to install cryptocurrency miners on both Windows and Linux systems.

Background

The Sysrv botnet first received attention at the end of 2020 because, at the time, it was one of the rare malware binaries written in Golang (aka Go). Since then the botnet has evolved, gained new features, and changed its behavior. One of the advantages of the Go language for malware authors is that it allows them to create multi-platform malware—the same malware binaries can be used against Windows and Linux machines.

The latest Sysrv variant scans the Internet for web servers with security holes such as path traversal, remote file disclosure, and arbitrary file download bugs. In short, any vulnerability that can be exploited to infect the machine.

Once it has gained a foothold and the bot malware is running on a compromised system, it deploys a Monero cryptocurrency miner.

The favorite cryptocurrency

The most popular cryptocurrency for attackers to mine is Monero. Monero is a cryptocurrency designed for privacy, promising “all the benefits of a decentralized cryptocurrency, without any of the typical privacy concessions”.

Contrary to what many people think, no cryptocurrency is truly anonymous, but there are other reasons why cryptojackers favor Monero:

  • Many cryptomining algorithms run significantly better on ASICs or GPUs, but Monero mining algorithms run better on CPUs, which matches what the cryptojacker can expect to find in a containerized environment.
  • Like Bitcoin, Monero is one of the better known cryptocurrencies and therefore is expected to hold its value. That’s a big perk given the unrest in cryptocurrency markets at the time of writing.

With cryptocurrencies, users hide behind a pseudonym, like one or more wallet IDs. Their activities can be tracked—forever—so keeping their identity secret depends on how well they can separate their real identity from their wallet IDs.

Linux malware

While Linux malware was almost unheard of a few years ago, a couple of factors have “helped” the development of malware that targets Linux-based systems. One is the rise of languages like Golang that enable the creation of multi-platform malware. Another is the use of Linux as the go-to operating system for many IoT devices.

IoT malware has matured over the years and has become popular, especially among botnets. With billions of Internet-connected devices like cars, household appliances, surveillance cameras, and network devices online, IoT devices are a very large bullseye for botnet malware.

The number of malware infections targeting Linux devices rose by 35% in 2021, most commonly to recruit IoT devices for distributed denial of service (DDoS) attacks. And around 95% of web servers run on Linux.

Vulnerabilities

Like many other botnets, Sysrv weaponizes bugs in WordPress plugins and in the Spring Framework.  It can rifle through WordPress files on compromised machines to take control of web server software. According to Microsoft:

“A new behavior observed in Sysrv-K is that it scans for WordPress configuration files and their backups to retrieve database credentials, which it uses to gain control of the web server.”

The latest Sysrv variant also scans for Secure Shell (SSH) keys, IP addresses, and host names on infected machines so that it can use this information to spread via SSH connections. SSH keys are an access credential used in the SSH protocol and are foundational to modern Infrastructure-as-a-Service platforms such as AWS, Google Cloud, and Azure.
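
To make the SSH angle concrete, here is a small defensive sketch of the kind of inventory a bot like Sysrv is after; running something similar yourself shows what an intruder landing on one of your servers would find. The search paths and filename patterns are assumptions, so adapt them to your environment:

    import os

    # Common locations and names for SSH material (assumptions; extend as needed).
    SEARCH_ROOTS = ["/root/.ssh", "/home"]
    INTERESTING_NAMES = {"id_rsa", "id_ed25519", "authorized_keys", "known_hosts", "config"}
    KEY_MARKER = b"PRIVATE KEY-----"  # appears in PEM/OpenSSH private key headers

    def looks_like_private_key(path):
        """Return True if the first few KB of the file contain a private key header."""
        try:
            with open(path, "rb") as fh:
                return KEY_MARKER in fh.read(4096)
        except OSError:
            return False

    for root in SEARCH_ROOTS:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if name in INTERESTING_NAMES or looks_like_private_key(path):
                    print("SSH material found:", path)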

Another vulnerability the botnet uses is CVE-2022-22947. Applications running certain versions of Spring Cloud Gateway are vulnerable to a code injection attack when the Gateway Actuator endpoint is enabled, exposed, and unsecured. A remote attacker could send a maliciously crafted request that allows arbitrary remote code execution on the host.
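
As a quick triage step, an administrator can check whether a gateway’s actuator endpoint answers unauthenticated requests at all; if it does, the instance deserves a closer look and a patch. This is only a hedged sketch: the base URL is a placeholder and the endpoint path assumes the default actuator mapping.

    import requests

    def gateway_actuator_exposed(base_url):
        """Return True if the Spring Cloud Gateway actuator routes endpoint answers
        without authentication (one precondition for CVE-2022-22947 abuse)."""
        try:
            resp = requests.get(f"{base_url}/actuator/gateway/routes", timeout=5)
        except requests.RequestException:
            return False
        return resp.status_code == 200

    # Placeholder host: point this at your own gateway only.
    if gateway_actuator_exposed("https://gateway.example.internal"):
        print("Actuator gateway endpoint is reachable; verify it is secured and patched.")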

Development

The botnet malware starts with a simple script file that deploys modules of exploits against potentially vulnerable targets. The developers constantly add new exploits and keep updating the code; if an exploit isn’t successful, they get rid of it. Ever since the first appearance of the Sysrv botnet, the threat actors have released new scripts almost monthly.

Mitigation

Most of the vulnerabilities that the Sysrv botnet uses have been patched, so an effective patch management strategy can be a big help in keeping these miners off your systems.

Another strategy to look at is whether all the servers that are at risk really need to be Internet-facing. In some cases it may be better to take them offline.

Don’t forget to equip your servers with anti-malware protection. The days when you could rest assured that your Linux server would be safe are unfortunately over.

Safeguard your credentials and make sure that multi-factor authentication (MFA) is in place for your important assets.

Stay safe, everyone!


Long lost @ symbol gets new life obscuring malicious URLs

Threat actors have rediscovered an old and little-used feature of web URLs, the innocuous @ symbol we usually see in email addresses, and started using it to obscure links to their malicious websites.

Researchers from Perception Point noticed it being used in a recent cyberattack against multiple organizations. While the attackers are still unknown, Perception Point traced them to an IP address in Japan.

The attack started with a phishing email pretending to be from Microsoft, claiming the user has messages that have been embargoed as potential spam. (Using familiar, transactional messages from well-known brands like Microsoft has become a popular tactic for scammers, as a way to defeat spam filters and keen-eyed users.)

The message reads:

You have new 5 held messages.

You can release all of your held messages and permit or block future emails the senders, or manage messages individually.

If the recipient clicks any of the links in the email, they are directed to a phishing page made to look like an Outlook login page.

If the recipient follows the often-repeated advice to hover their pointer over the links before clicking them, to see where they go, they will see this weird-looking URL, and probably be none the wiser:

https://$%^&;****((@bit.ly/3vzLjtz#ZmluYW5jZUBuZ3BjYXAuY29t

This is almost certainly designed to bamboozle users, but to your computer it looks fine. As weird as this URL appears, it is actually valid and acceptable, and your browser will happily parse it for you.

Users who clicked on the link were passed through a chain of redirects before ending up at a phishing page that looks like the Outlook login screen.

The phishing site is a copy of the Outlook login page

Reading the URL

As weird as it looks, the URL in this phishing campaign sticks to the rules of what’s allowed in a web address. The part you see least often is the @ symbol. RFC 3986 refers to anything after https:// and before the @ symbol, shown below, as userinfo. This part of the URL is for passing authentication information like a username and password, but it is very rarely used, and is simply ignored as a so-called “opaque string” by many systems.

https://$%^&;****((@bit.ly/3vzLjtz#ZmluYW5jZUBuZ3BjYXAuY29t
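
You can see how a standards-compliant parser treats that string with a few lines of Python: everything before the @ is parsed as userinfo, and the real host is bit.ly. (The snippet only parses the string locally; it doesn’t fetch anything.)

    from urllib.parse import urlsplit

    url = "https://$%^&;****((@bit.ly/3vzLjtz#ZmluYW5jZUBuZ3BjYXAuY29t"
    parts = urlsplit(url)

    print(parts.username)   # $%^&;****((  <- the "userinfo", ignored by the destination
    print(parts.hostname)   # bit.ly       <- the actual destination host
    print(parts.path)       # /3vzLjtz
    print(parts.fragment)   # ZmluYW5jZUBuZ3BjYXAuY29t <- never sent to the server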

The last part of the URL after the # is also ignored when you click the link. This is called the fragment identifier and it represents a piece of the destination page. The browser might use it to scroll to a section of the destination page, or it might be used to pass information to the destination page, but it plays no part in determining what the destination actually is.

https://bit.ly/3vzLjtz#ZmluYW5jZUBuZ3BjYXAuY29t

In this case the fragment ID—ZmluYW5jZUBuZ3BjYXAuY29t—appears to be a unique ID that identifies the email address the phish was sent to. If it’s removed, the link still works, but the final destination simply shows a loading icon, perhaps to hide the site’s true intentions from accidental visitors or researchers.
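
The fragment also looks like standard Base64; decoding it yields an email-shaped string, which fits the idea that it identifies the intended recipient. A one-line check:

    import base64

    fragment = "ZmluYW5jZUBuZ3BjYXAuY29t"
    print(base64.b64decode(fragment).decode("utf-8"))  # decodes to an email address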


What we are left with, when we remove the parts of the link that are ignored by the browser, is a very ordinary-looking bit.ly link. Exactly the kind of thing you might think is suspicious in an email that says it’s from Microsoft.

https://bit.ly/3vzLjtz

As you probably know, bit.ly is a URL shortening service. The bit.ly link redirects users to another URL, likely used for tracking, which itself redirects users to the phishing page.
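
Analysts who want to map where a shortened link actually leads can enumerate the redirect chain without opening a browser. The sketch below uses a placeholder short link; only ever point something like this at suspicious URLs from an isolated analysis environment.

    import requests

    def redirect_chain(url):
        """Follow HTTP redirects and return every hop, final destination last."""
        resp = requests.get(url, allow_redirects=True, timeout=10)
        return [r.url for r in resp.history] + [resp.url]

    for hop in redirect_chain("https://bit.ly/EXAMPLE"):  # placeholder, not the real campaign link
        print(hop)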

Does your browser support the @ symbol?

If you are one of the 2.6 billion people using Chrome, the answer is “yes”: URLs that use the @ symbol work in Chrome and other Chromium-based browsers such as Vivaldi, Brave, and Microsoft Edge.

The latest version of Microsoft’s Internet Explorer doesn’t parse URLs with the @ delimiter though.

Firefox and Firefox-based browsers, such as Tor Browser and Pale Moon, are also affected.

And what about Safari?

According to Thomas Reed, Malwarebytes’ Director of Mac and Mobile, “This technique appears to work in Safari and all other major Mac browsers. Firefox will show a warning when attempting to visit such a link. Unfortunately, Safari—the most popular browser on macOS—does not display a warning and opens the link without objection, as does Chrome.”

Reed also points out that email software will often look for URLs in plain text emails and convert them to clickable links, but the @ symbol seems to prevent this. According to Reed: “The URL used by the phishing campaign does not become a clickable link by itself.” The links will still work in HTML emails, so this isn’t much of a barrier, just a feather in the cap of holdouts who insist on viewing their emails in plain text!

The wide support for the confusing and little-used @ symbol could see it used more widely. In a Threatpost interview, Perception Point’s Vice President of Customer Success and Incident Response, Motti Elloul, predicted that this won’t be the last time we’ll see phishing attacks taking advantage of it.

“The technique has the potential to catch on quickly, because it’s very easy to execute,” he said. “In order to identify the technique and avoid the fallout from it slipping past security systems, security teams need to update their detection engines in order to double check the URL structure whenever @ is included.”


Gmail-linked Facebook accounts vulnerable to attack using a chain of bugs—now fixed

A security researcher has disclosed how he chained together multiple bugs in order to take over Facebook accounts that were linked to a Gmail account.

Youssef Sammouda says it would have been possible to target all Facebook users, but that developing such an exploit was more complicated, and targeting Gmail-linked accounts was enough to demonstrate the impact of his discoveries.

Linked accounts

Linked accounts were invented to make logging in easier. You can use one account to log in to other apps, sites and services. The most commonly used is the link between Facebook and Instagram, so we will use that as an example. Log in to one account and you are also practically logged in at the other. All you need to do to access the account is confirm that the account is yours.

Since 2009, Facebook has supported myOpenID, which allows users to log in to Facebook with their Gmail credentials. Put more simply, this means that if you are currently logged in to your Gmail account, the moment you visit Facebook you will be automatically logged in.

Sandboxed CAPTCHA

The first discovery that enabled this takeover method lies in the fact that Facebook uses an extra security mechanism called “Checkpoint” to make sure that any user who logs in is who they claim to be. In some cases Checkpoint presents those users with a CAPTCHA challenge to limit the number of tries.

Facebook uses Google reCAPTCHA and, as an extra security feature, the CAPTCHA is put in an iframe. The iframe is hosted on a sandboxed domain (fbsbx.com) to avoid adding third-party code from Google into the main domain (facebook.com). An iframe is a piece of HTML code that allows developers to embed another HTML page on their website.

Now, for some reason, probably for logging purposes, the URL for the iframe includes the link to the checkpoint as a parameter.

For example, let’s say the current URL is https://www.facebook.com/checkpoint/CHECKPOINT_ID/?test=test. In that case the iframe page would be accessible through this URL: https://www.fbsbx.com/captcha/recaptcha/iframe/?referer=https%3A%2F%2Fwww.facebook.com%2Fcheckpoint%2FCHECKPOINT_ID%2F%3Ftest%3Dtest
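
The percent-encoded blob in that iframe URL is simply the checkpoint URL run through standard URL encoding, as a couple of lines of Python show (CHECKPOINT_ID is a placeholder, as in the example above):

    from urllib.parse import urlencode

    checkpoint = "https://www.facebook.com/checkpoint/CHECKPOINT_ID/?test=test"
    iframe_url = "https://www.fbsbx.com/captcha/recaptcha/iframe/?" + urlencode({"referer": checkpoint})
    print(iframe_url)
    # https://www.fbsbx.com/captcha/recaptcha/iframe/?referer=https%3A%2F%2Fwww.facebook.com%2Fcheckpoint%2FCHECKPOINT_ID%2F%3Ftest%3Dtest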

The attacker can replace the referer part of the URL, changing it into a next parameter. This allows the attacker to send the URL, including the login parameters, to the sandbox domain. The next step is to find a way to grab it from there, which is where cross-site scripting (XSS) comes in.

XSS

XSS is a type of security vulnerability, and can be found in some web applications. XSS attacks enable attackers to inject client-side scripts into web pages viewed by other users. Attackers can use a cross-site scripting vulnerability to bypass access controls such as the same-origin policy.

The same-origin policy (SOP) is where a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. This policy prevents a malicious script on one page from obtaining access to sensitive data on another web page.

In this case that step was easy: Facebook allows developers to test certain features and makes it possible for them to upload custom HTML files, and these files end up hosted on the fbsbx.com domain. As we saw earlier, that is the same domain used for the Google CAPTCHA iframe, which allows the attacker to bypass the same-origin policy, since the target page and the custom script are on the same domain.
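
A browser’s notion of “same origin” boils down to the scheme, host, and port of a URL, which is why an uploaded HTML file on fbsbx.com can read the CAPTCHA iframe hosted there, while a page on an attacker’s own domain cannot. A rough illustration (the uploaded-file path and attacker domain are hypothetical):

    from urllib.parse import urlsplit

    def origin(url):
        """Reduce a URL to the (scheme, host, port) triple browsers compare."""
        p = urlsplit(url)
        default_ports = {"http": 80, "https": 443}
        return (p.scheme, p.hostname, p.port or default_ports[p.scheme])

    captcha_iframe = "https://www.fbsbx.com/captcha/recaptcha/iframe/"
    uploaded_html = "https://www.fbsbx.com/uploads/custom_page.html"   # hypothetical upload path
    attacker_site = "https://attacker.example/steal.html"              # hypothetical attacker page

    print(origin(captcha_iframe) == origin(uploaded_html))  # True:  same origin, SOP allows access
    print(origin(captcha_iframe) == origin(attacker_site))  # False: different origin, SOP blocks it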

CSRF

CSRF is short for cross-site request forgery. In a CSRF attack, an innocent end user is tricked by an attacker into submitting a web request that they did not intend. This may cause actions to be performed on the website that can include inadvertent client or server data leakage, change of session state, or manipulation of an end user’s account.

In his attack script, Youssef used undisclosed CSRF attacks to log the target user out and later log them back in through the Checkpoint.

OAuth

OAuth is a standard authorization protocol. It allows us to get access to protected data from an application. An OAuth Access Token is a string that the OAuth client uses to make requests to the resource server.
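
To make “access token” concrete: it is an opaque string the client presents, typically as a Bearer credential, on every call to the resource server. A generic, purely illustrative example (the endpoint and token are placeholders, not Facebook’s or Google’s actual API):

    import requests

    access_token = "EXAMPLE_OPAQUE_TOKEN"  # placeholder, never a real token

    resp = requests.get(
        "https://resource-server.example/api/me",   # hypothetical protected endpoint
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    print(resp.status_code)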

In this case, attackers can log out the current user and then log them back in to the attacker account which is in the Checkpoint state. But how does that allow the attacker to take over the Facebook account? By intercepting an OAuth Access Token string.

This is done by targeting a third-party OAuth provider that Facebook uses. One of these providers is Gmail. Gmail sends the OAuth Access Token back to www.facebook.com for the logged-in user. Since the attacker can steal the URL, including the login parameters, by sending it to the sandbox domain, they can intercept the OAuth Access Token string and the user’s id_token.

Takeover

In summary, the attacker can upload a script to the Facebook sandbox domain and try to trick the target(s) into visiting that page by sending them the URL.

Simplified, the script will:

  1. Log out the user from their current session (CSRF)
  2. Send them to the Checkpoint to log back in (CSRF)
  3. Open a constructed accounts.google.com URL that redirects the target to Facebook

Once the target has visited the page with the script outlined above, the attacker can start harvesting the strings they need to take over the Facebook account.

  1. The attacker waits for the victim to log in and can later extract the Google OAuth Access Token string and id_token
  2. Using the email address included in the id_token they can start a password recovery process
  3. Now the attacker can construct a URL to access the target account with all the data they have gathered

How to unlink accounts

Some sites will offer to log you in using your Facebook credentials. The same reasoning that applies to using the same password for every site applies to using your Facebook credentials to log in at other sites. We wouldn’t recommend it, because if anyone gets hold of the one password that controls them all, you’re in even bigger trouble than you would be if only one site’s password were compromised.

You can check which accounts are linked to your Facebook account by opening the Facebook settings menu. Scroll down and open Settings & Privacy, then open Settings. At the bottom on the left, use the Accounts Center button. Tap Accounts & Profiles. There you can see a list of the accounts linked to your Facebook account and remove any unwanted linked accounts.

Facebook fix

Youssef says he reported the issue to Facebook in February. It was fixed in March and a $44,625 bounty was awarded earlier this month.

We interviewed Youssef last year. He told us he has submitted at least a hundred reports to Facebook which have been resolved, making Facebook a safer platform along the way.


Update now! Apple patches zero-day vulnerability affecting Macs, Apple Watch, and Apple TV

Apple has released security updates for a zero-day vulnerability that affects multiple products, including Mac, Apple Watch, and Apple TV.

The flaw is an out-of-bounds write issue—tracked as CVE-2022-22675—in AppleAVD, a decoder that handles specific media files.

An out-of-bounds write or read flaw makes it possible to manipulate parts of the memory which are allocated to more critical functions. This could allow an attacker to write code to a part of the memory where it will be executed with permissions that the program and user should not have.

Attackers could take control of affected devices if they exploit this flaw.

CVE-2022-22675 is the same vulnerability that was patched in March for macOS Monterey 12.3.1, iOS 15.4.1, and iPadOS 15.4.1.

This latest batch of updates brings improved bounds checking to additional Apple products running specific operating systems, namely macOS Big Sur 11.6.6, watchOS 8.6, and tvOS 15.5. These OSs run on Macs using Big Sur, Apple Watch Series 3 and later, and Apple TV (4K, 4K 2nd generation, and HD).

Apple says it’s aware this flaw is currently being abused in the wild. It didn’t go into detail, likely to give customers time to patch up their Apple devices.

BleepingComputer has noted that attacks exploiting CVE-2022-22675 might only be targeted in nature. However, if you’re using any of the Apple products mentioned above, it is still wise to apply the updates as soon as you can.

Stay safe!


Car owners warned of another theft-enabling relay attack

Tesla owners are no strangers to seeing reports of cars being tampered with outside of their control. Back in 2021, a zero-click exploit helped a drone take over the car’s entertainment system. In 2016, we had a brakes and doors issue. 2020 saw people rewriting key-fob firmware via Bluetooth. And in January this year, a teen claimed he had managed to remotely hack into 25 Tesla vehicles.

This time, we have another Bluetooth key-fob issue making waves. Although there is a Tesla-specific advisory, there are also advisories for the issue in general and for a type of smart lock.

Bluetooth Low Energy and keyless entry systems

The researchers who discovered this issue are clear that it isn’t “just” a problem for Tesla. It’s more of a problem related to the Bluetooth Low Energy (BLE) protocol used by the keyless entry system. Bluetooth is a short-range wireless technology which uses radio frequencies and allows you to share data. You can connect one device to another, interact with Bluetooth beacons, and much more. Bluetooth is a perfect fit for something as commonplace as keyless door entry.

As the name suggests, BLE is all about providing functionality through very low energy consumption. As BLE is only active for very short periods of time, it’s a much more efficient way to do things.

The relay attack in action

Researchers demonstrated how this compromise of the keyless system works in practice. Though light on details, Bloomberg mentions it is a relay attack. This is a fairly common method used by people in the car research realm to try and pop locks.

To help describe a relay attack, it’s common to first explain how a Man in the Middle (MitM) attack works:

In cybersecurity, a Man-in-the-Middle (MitM) attack happens when a threat actor manages to intercept and forward the traffic between two entities without either of them noticing. In addition, some MitM attacks alter the communication between parties, again without them realizing.

For relay attacks, think of two people (or one person with two devices) sliding their way into the device-based communication. Some of the diagrams I’ve seen explaining this attack can be a little confusing, so here’s the scenario in plain terms.

Two people approach the car. One pulls the handles to trigger the car’s security system into sending out a message: “Are you the owner of this car? Are your keys the correct keys for this vehicle?” The authentication challenge is beamed out into the void. The second person is standing by the house with a device.

People often leave their car keys close to the front door, which puts the keys within range of the second person’s device. That device picks up the fob’s response and beams it back to the accomplice standing by the car, whose device relays the fob’s authentication confirmation to the car, and the door unlocks. They then repeat the process a second time to fool the car into thinking the keys are present, at which point they’re able to drive away.
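
The weakness is easiest to see in a toy model: the challenge-response between car and fob can be cryptographically sound, yet relaying still wins because nothing in the exchange proves the fob is physically near the car. A minimal sketch (the HMAC scheme here is an assumption for illustration only, not how any particular vehicle actually works):

    import hashlib
    import hmac
    import os

    SHARED_KEY = os.urandom(32)   # secret shared by the car and its genuine fob

    def car_challenge():
        return os.urandom(16)

    def fob_response(challenge):
        # The fob answers any challenge that reaches it; it cannot tell how far away the car is.
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

    def car_accepts(challenge, response):
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

    # Relay attack: two devices simply forward the bytes between the car and the distant fob.
    challenge = car_challenge()                  # captured by the device held next to the car
    relayed_challenge = challenge                # ...beamed to the device standing by the house
    response = fob_response(relayed_challenge)   # the genuine fob answers as if the car were nearby
    relayed_response = response                  # ...beamed back to the device at the car
    print(car_accepts(challenge, relayed_response))  # True: the car unlocks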

A gear-shift in criminal perspective

Criminals are after maximum gain for minimum effort. They don’t want to attract attention from law enforcement. The sneakier they can be, the less commotion they cause, and the better it’s going to be for them in the long-term.

Think about how seamless a relay approach is to car theft. It’s quick, it’s easy, and it’s completely silent. Consider how much money a professional outfit pulling these car heists can generate. The alternative is messy break-ins, noise, rummaging for keys in a house full of screaming people and barking dogs. Not to mention a significantly increased chance of being caught. If you were a career criminal, which approach would you favour?

A problem which refuses to go away

Relay attacks on cars have been around for several years now. Stolen vehicles are the go-to example of relay attacks if you go looking for more information on the technique. Advice for avoiding relay attacks is widespread, from keeping keys away from the front door (which you should do anyway) to placing them in a signal-blocking bag.

For the Tesla-specific attack, a relay device was placed “within roughly 15 yards” of the smartphone/key fob, with another plugged into a laptop close to the vehicle. You can see more information about the more general forms of the attack here.

The article mentions that there’s no evidence of this Tesla tomfoolery having happened in the wild. Even so, relay attacks can and do take place. If your car operates a keyless system, take this latest report as a heads-up to ensure your vehicle is safe from attack no matter the make or model.


AirTag stalking: What is it, and how can I avoid it?

More voices are being raised against everyday technology being repurposed to attack and stalk people. Most recently, Ohio has reportedly proposed a new bill relating to electronic tagging devices.

The bill, aimed at making short work of a loophole allowing people with no stalking or domestic violence record to use tracking devices, is currently in the proposal stages. As PC Mag mentions, 19 states currently ban the use of trackers to aid stalking.

Dude, where’s my car?

Using tech to find missing items is nothing new. Back in the 80s, my dad had one of the new wave of tools used to find your lost keys. You put a small device on your keychain, and when they inevitably went missing, you whistled. The device, assuming it was nearby, would beep or whistle back. That is, it would if the range wasn’t awful and it frequently didn’t respond to your best whistle attempts.

Skip forward enough years, and we had a similar concept, but with Bluetooth and radio frequency. The range on those devices isn’t great either, so their use is limited.

Step up to the plate, tracker devices.

What is an AirTag?

There are many types of tracking device but, unfortunately for Apple, AirTags are the ones most closely associated with this form of stalking.

Find My, an app for Apple devices, is an incredibly slick way to keep track of almost any Apple product you can think of. Making your lost phone play a sound, offline finding, and sending the last known location when the battery is low are some of the fine-tuning options available.

An AirTag is a small, round device which plugs right into the Find My options. The idea is a supercharged version of ye olde key whistler. Misplace an item attached to an AirTag, and when you get close enough you’ll even have Precision Finding kicking in to guide you to the lost item.

This is all incredibly helpful, especially if you’re good at misplacing things. Even better if something is stolen. Where it goes wrong is when people with bad intentions immediately figure out ways they can harass people with it.

A stalker’s life for me

Back in January, model Brooks Nader claimed someone placed an AirTag in her coat. Whoever was responsible used it to follow her around for several hours. She only became aware of what was happening because her phone alerted her to the tag’s presence.

However, this is an Apple-specific product, which means not all devices will be able to flag it. Android users are resorting to downloading standalone apps which can flush out unwanted AirTag stalkers. Meanwhile, the case numbers themselves are steadily increasing across multiple regions. Smart stalkers will place tags on items or in places victims won’t suspect. A tag under the car means victims may never even find out they’ve been stalked in the first place.

Apple pushes back on AirTag stalking

This isn’t great news for any company faced with a sudden wave of people abusing their devices. Apple is trying to lead the charge against these practices by making it harder for stalkers:

  • Improving the accuracy of “unknown accessory detected” notices
  • Adding support documents for people who believe they may be being stalked
  • Implementing notices which say “tracking without consent is a crime”

Advice for people worried about AirTag stalking

Apple’s support document lists two ways to discover unwanted tracking.

  1. If you have an iPhone, iPad, or iPod touch, Find My will send a notification to your Apple device. This feature is available on iOS or iPadOS 14.5 or later. To receive alerts, make sure that you:
    Go to Settings > Privacy > Location Services, and turn Location Services on.
    Go to Settings > Privacy > Location Services > System Services. Turn Find My iPhone on.
    Go to Settings > Privacy > Location Services > System Services. Turn Significant Locations on to be notified when you arrive at a significant location, such as your home.
    Go to Settings > Bluetooth, and turn Bluetooth on.
    Go to the Find My app, tap the Me tab, and turn Tracking Notifications on.
  2. If you don’t have an iOS device or a smartphone, an AirTag that isn’t with its owner for a period of time will emit a sound when it’s moved. This type of notification isn’t supported with AirPods.

Any alert on your mobile device that a tracker is nearby allows you to make the tracker produce a noise via your phone. You can make this noise repeat as often as you want until the device is found.

Disabling the AirTag

If you can’t find the physical object, don’t worry. You can disable it, again using your phone. Apple’s advice:

To disable the AirTag, AirPods, or Find My network accessory and stop it from sharing its location, tap Instructions to Disable and follow the onscreen steps. After the AirTag, AirPods, or Find My network accessory is disabled, the owner can no longer get updates on its current location. You will also no longer receive any unwanted tracking alerts for this item.

Apple has been quite visible in both drawing attention to the problem and providing accessible and straightforward solutions to shutting unwanted tracking down. We can only hope that other companies whose trackers are being misused in this way are doing their part too.


A week in security (May 9 – 15)

Last week on Malwarebytes Labs:

Stay safe!


Fake reCAPTCHA forms dupe users via compromised WordPress sites

Researchers at Sucuri investigated a number of WordPress websites whose owners complained about unwanted redirects, and found sites that use fake CAPTCHA forms to get visitors to accept web push notifications.

These websites are part of a new wave of a campaign that leverages many compromised WordPress sites.

CAPTCHA

CAPTCHA (“Completely Automated Public Turing test to tell Computers and Humans Apart”) is one of the annoyances that we have learned to take for granted when we browse the Internet. Scientists developed CAPTCHA as a method to tell humans and bots apart, so as to keep bots from accessing sites or systems where they are not welcome.

Google bought and owns reCAPTCHA, a CAPTCHA system developed expressly to reduce the amount of user interaction needed. The original version asked users to decipher hard-to-read text or match images. Version 2 only required users to decipher text or match images if the analysis of cookies and canvas rendering suggested the page was being downloaded automatically. Since version 3, reCAPTCHA doesn’t interrupt users at all, running automatically when they load pages or click buttons.

The basic version of a real reCAPTCHA, which the threat actors used as a template to create the fake ones, looks like this:

A legitimate reCAPTCHA

The campaign

The fake CAPTCHA sites are part of a long-running campaign responsible for injecting malicious scripts into compromised WordPress websites. The campaign leverages known vulnerabilities in WordPress themes and plugins and has impacted an enormous number of websites over the years.

The compromised websites all share a common issue: the threat actors injected malicious JavaScript into the affected website’s files and database. On a compromised site, the attackers attempted to automatically infect any .js file with “jquery” in its name, injecting obfuscated code when successful. The malicious JavaScript was appended below the existing script, or placed in the head of the page, where it fired on every page load and redirected site visitors to a destination chosen by the threat actor.
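
Site owners can do a rough first-pass check for this particular pattern themselves by listing every .js file with “jquery” in its name and flagging ones that changed recently, so they can be compared against clean copies. This is only a heuristic sketch (the WordPress document root and the “recently modified” window are assumptions), not a substitute for a proper scanner:

    import os
    import time

    WP_ROOT = "/var/www/html"   # assumption: default WordPress docroot; adjust to your install
    RECENT_DAYS = 30            # assumption: how far back counts as "recently modified"

    now = time.time()
    for dirpath, _dirs, files in os.walk(WP_ROOT):
        for name in files:
            if name.endswith(".js") and "jquery" in name.lower():
                path = os.path.join(dirpath, name)
                age_days = (now - os.path.getmtime(path)) / 86400
                flag = "  <-- modified recently, compare against a clean copy" if age_days < RECENT_DAYS else ""
                print(f"{path}  ({os.path.getsize(path)} bytes, modified {age_days:.0f} days ago){flag}")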

The Malwarebytes Threat Intelligence Team tracked a rogue affiliate’s traffic which flowed through the same local[.]drakefollow[.]com subdomain mentioned in the Sucuri blog. In this case the threat actor chose to promote a legitimate security product, but they could just as well have led visitors to potentially unwanted programs (PUPs), adware, or tech support scams.

Traffic flow from a compromised WordPress site to the rogue affiliate’s site (Wireshark capture)

The fake CAPTCHA

At this point in the chain of redirections, the fake reCAPTCHA websites kick in as the final step in duping the visitor. The unsuspecting visitor lands on a site that tries to trick them into accepting push notifications from the landing page’s domain.

The fake reCAPTCHA

Visitors think they need to click “Allow” to get past the CAPTCHA screen, when in fact they are giving permission to the domain to send them push notifications.

By design, push notifications work similarly across different operating systems and web browsers. They appear outside of the browser window, just above the taskbar on the right-hand side, which is misleading because they may seem to originate from the operating system. It is hard to tell the difference between a web push notification and an alert that comes from the operating system or another program installed on the device, which makes it difficult for the unsuspecting user of an affected system to know what is going on.

As we reported in the past, adware, search hijackers, and PUP families have added push notifications as one of their attack vectors. Sucuri warns that it is also one of the most common ways attackers display “tech support” scams, where users are told their computer is infected or slow and they should call a toll-free number to fix the problem.

Removal and mitigation

Knowing that these fake reCAPTCHA sites exist, and being able to tell them apart from the real thing, is your best protection. Also, many security programs, including Malwarebytes, will block access to the campaign’s domains.

If your system shows you push notifications, you can find detailed instructions on how to disable and remove permissions for browser push notifications in our article: Browser push notifications: a feature asking to be abused.

Website owners can use Sucuri’s free remote website scanner to detect the malware.

Stay safe, everyone!

Special thanks to the Malwarebytes Threat Intelligence Team for their contribution and the screenshot.
