IT NEWS

Malware on the Google Play store leads to harmful phishing sites

A family of malicious apps from developer Mobile apps Group is listed on Google Play and infected with Android/Trojan.HiddenAds.BTGTHB. In total, four apps are listed, and together they have amassed at least one million downloads.

Older versions of these apps have been detected in the past as different variants of Android/Trojan.HiddenAds. Yet, the developer is still on Google Play dispensing its latest HiddenAds malware.

This follows on the heels of adware that was found on Google Play just a couple of months ago from a rogue PDF reader.


Delayed ungratification

Our analysis of this malware starts with us finding an app named Bluetooth Auto Connect (full app information at the bottom of this article). When users first install this malicious app, it takes a couple of days before it begins to display malicious behavior. Delaying malicious behavior is a common tactic used by malware developers to evade detection. It turns out that this app uses delays quite a bit, as you’ll discover in our analysis.

After the initial delay, the malicious app opens phishing sites in Chrome. The content of the phishing sites varies—some are harmless sites used simply to produce pay-per-click revenue, and others are more dangerous phishing sites that attempt to trick unsuspecting users. For example, one site includes adult content that leads to phishing pages telling the user they’ve been infected or need to perform an update.

The Chrome tabs are opened in the background, even while the mobile device is locked. When the user unlocks their device, Chrome opens with the latest site. New tabs with new sites open frequently, so unlocking your phone after several hours means closing multiple tabs. The user’s browser history will also be a long list of nasty phishing sites.


Deeper analysis using LogCat

As per my last blog post, I once again used an Android OS test phone and plugged it into my laptop running LogCat via good old Android Device Monitor. To clarify, LogCat is used to observe all logs created by installed apps and the Android OS, including the logs of this malware.  The first log entry from this malware came several hours after the initial installation.

10-20 05:11:07.504: D/sdfsdf(11987): {"adDelay":7200000,"flurryId":"YQBTHDXPVMFT3D7Z7Q92","chromeLink":"https://<phishing_URL>.com/?ts=1666264263370&id=344","showOuterAd":true,"firstAdDelay":259200000,"versionWithNoAd":"no"}

The first important datapoint of the log entry is what LogCat calls the Tag. This is usually a descriptor of the log text, such as ActivityManager. In this case, they use an obfuscated tag of sdfsdf — another sign of willful deception. Diving into the Text segment of the log, where the important data is stored, there are a couple of key datapoints: adDelay, chromeLink, and firstAdDelay.

First, the chromeLink is the URL of the phishing site to open in Chrome. Next, let’s look at the firstAdDelay datapoint with the value of 259200000. This value is the delay, in milliseconds, before the first ad is displayed—seventy-two hours. Add to that the several hours before the log entry is created, and you have roughly four days from the time the malicious app is installed to when it displays the first ad in Chrome.

Keep in mind that the delay length varies from app to app. Additionally, after the first ad is displayed, there is an adDelay of 7200000 milliseconds, or two hours. It’s unclear whether that means waiting an additional two hours after the first ad delay, or displaying another ad two hours after the first ad. Regardless, it is another example of using delays to evade detection. These types of log entries are recorded every fifteen minutes, constantly setting new time-released ads.
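To make the delay arithmetic concrete, here is a minimal Python sketch (an analysis aid, not the malware’s own code) that parses the Text segment of the log entry above and converts the millisecond values into human-readable durations:

import json
from datetime import timedelta

# Text segment of the LogCat entry shown above (URL redacted as in the log).
log_text = (
    '{"adDelay":7200000,"flurryId":"YQBTHDXPVMFT3D7Z7Q92",'
    '"chromeLink":"https://<phishing_URL>.com/?ts=1666264263370&id=344",'
    '"showOuterAd":true,"firstAdDelay":259200000,"versionWithNoAd":"no"}'
)

config = json.loads(log_text)

# Convert the millisecond delays into human-readable durations.
print("First ad delay: ", timedelta(milliseconds=config["firstAdDelay"]))  # 3 days (72 hours)
print("Repeat ad delay:", timedelta(milliseconds=config["adDelay"]))       # 2:00:00 (2 hours)
print("Ad target:      ", config["chromeLink"])

Running it confirms the seventy-two-hour first delay and the two-hour repeat delay described above.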

After the delay ends, the ad is triggered to display. At that instant, additional log entries are created under the ActivityManager tag.

10-24 08:26:30.476: I/ActivityManager(765): START u0 {act=android.intent.action.VIEW dat=https:// <phishing_URL>.com/... flg=0x14002000 pkg=com.android.chrome cmp=com.android.chrome/org.chromium.chrome.browser.ChromeTabbedActivity (has extras)} from uid 10062
10-24 08:26:31.026: W/ActivityManager(765): Activity pause timeout for ActivityRecord{736d893 u0 com.android.chrome/org.chromium.chrome.browser.ChromeTabbedActivity t11780}

These log entries show Chrome opening a new tab with a phishing site using the activity ChromeTabbedActivity. After that point, unlocking the mobile device will reveal the ad.
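If you have a LogCat dump saved to a file, a quick way to surface these launches is to filter for the same pattern. The Python sketch below is a hypothetical helper (the regular expression and usage are my own, not part of any official tooling) that prints every ActivityManager START entry handing a VIEW intent to ChromeTabbedActivity:

import re
import sys

# Flags ActivityManager START entries that hand a VIEW intent to Chrome's
# ChromeTabbedActivity -- the pattern seen in the log entries above.
PATTERN = re.compile(
    r"ActivityManager.*START\b.*android\.intent\.action\.VIEW"
    r".*ChromeTabbedActivity"
)

def find_background_tab_launches(logcat_path: str) -> None:
    """Print every line of a saved LogCat dump that opens a Chrome tab."""
    with open(logcat_path, errors="replace") as log:
        for line in log:
            if PATTERN.search(line):
                print(line.rstrip())

if __name__ == "__main__":
    # Usage: python find_tabs.py logcat_dump.txt
    find_background_tab_launches(sys.argv[1])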

Tracing it back to code

Now that we have LogCat entries, the next step in our analysis is to trace back to where in the code this malicious behavior is happening.  To do that, we first need to look in the app’s Manifest file.

The Manifest file is basically a guide the Android OS uses to run an app’s activities, services, and receivers. Each activity, service, and receiver contains code to be run. Every Android app has a Manifest file.

Many times, the activities, services, and receivers used by a particular piece of malware are unique. However, at first glance it is hard to tell which of this malware’s activities, services, or receivers are running the malicious code. This is where the LogCat entries can assist. These logs are the smoking gun showing exactly which activities, services, or receivers are triggering malicious behavior. Ironically, their attempt to evade detection using a LogCat tag of sdfsdf made tracking the culprit easy. A quick search of sdfsdf in the code reveals it traces back to the service com.github.libpackage.service.PushService and the activity com.github.libpackage.view.NotificationActivity. The use of the popular GitHub name in the naming convention is yet another blatant attempt to avoid suspicion. From there, we were able to further verify using the additional datapoints from the LogCat text.
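That quick search can be reproduced with a short script. The Python sketch below is a hypothetical helper that walks a directory of decompiled sources (for example, the output of a decompiler such as jadx) and reports every file and line that references the obfuscated tag; the file extensions and function name are illustrative assumptions:

import os
import sys

def find_tag_references(decompiled_dir: str, tag: str = "sdfsdf") -> None:
    """Walk decompiled sources and report files that contain the given tag."""
    for root, _dirs, files in os.walk(decompiled_dir):
        for name in files:
            if not name.endswith((".java", ".kt", ".smali", ".xml")):
                continue
            path = os.path.join(root, name)
            try:
                with open(path, errors="replace") as source:
                    for number, line in enumerate(source, start=1):
                        if tag in line:
                            print(f"{path}:{number}: {line.strip()}")
            except OSError:
                pass  # unreadable file; skip it

if __name__ == "__main__":
    # Usage: python find_tag.py <path-to-decompiled-app> [tag]
    tag = sys.argv[2] if len(sys.argv) > 2 else "sdfsdf"
    find_tag_references(sys.argv[1], tag)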

History of HiddenAds

Continuing to focus on Bluetooth Auto Connect, this app has had a long history of being infected with different variants of HiddenAds.  Note that other apps from Mobile apps Group have a similar history. 

  • Date of release 2020-12-??: Bluetooth Auto Connect v1.4 infected with Android/Trojan.HiddenAds.llib
  • Date of release 2021-01-05: Bluetooth Auto Connect v1.8 infected with Android/Trojan.HiddenAds.llib
  • Date of release 2021-01-11: Bluetooth Auto Connect v1.9 infected with Android/Trojan.HiddenAds.llib
  • Date of release 2021-01-19: Bluetooth Auto Connect v2.2 infected with Android/Trojan.HiddenAds.llib
  • Date of release 2021-01-22: Bluetooth Auto Connect v2.3 clean
  • Date of release 2021-02-09: Bluetooth Auto Connect v2.6 infected with Android/Trojan.HiddenAds.ATASHT
  • Date of release 2021-02-10: Bluetooth Auto Connect v2.7 infected with Android/Trojan.HiddenAds.ATASHT
  • Date of release 2021-02-12: Bluetooth Auto Connect v2.9 infected with Android/Trojan.HiddenAds.ATASHT
  • Date of release 2021-02-26: Bluetooth Auto Connect v3.0 clean
  • Date of release 2021-03-04: Bluetooth Auto Connect v3.1 clean
  • Date of release 2021-04-26: Bluetooth Auto Connect v3.8 clean
  • Date of release 2021-06-11: Bluetooth Auto Connect v4.0 clean
  • Date of release 2021-07-22: Bluetooth Auto Connect v4.1 clean
  • Date of release 2021-10-21: Bluetooth Auto Connect v4.5 clean
  • Date of release 2021-12-15: Bluetooth Auto Connect v4.6 infected with Android/Trojan.HiddenAds.BTGTHB
  • Date of release 2021-10-21: Bluetooth Auto Connect v4.8 infected with Android/Trojan.HiddenAds.BTGTHB
  • Date of release 2022-08-02: Bluetooth Auto Connect v5.4 infected with Android/Trojan.HiddenAds.BTGTHB
  • Date of release 2022-08-17: Bluetooth Auto Connect v5.5 infected with Android/Trojan.HiddenAds.BTGTHB
  • Date of release 2022-10-12: Bluetooth Auto Connect v5.7 infected with Android/Trojan.HiddenAds.BTGTHB (current version on Google Play)

It is disappointing that Mobile apps Group has persisted on the Google Play store after having malicious apps in the past — twice! It’s unclear if previous malicious versions from before January 19, 2021—versions 2.2 and before—were ever caught by Google Play. Since version 2.3 was clean, it seems likely that the developers were caught and uploaded a clean version.

What we do know is that DrWeb blogged about Bluetooth Auto Connect v2.5 having what it calls Adware.NewDich back on February 24, 2021. We can only assume Google Play took action at that point by removing the most current malicious version at the time—version 2.9. However, on February 26, just two days after the DrWeb blog, the developers released the clean version 3.0 onto Google Play. That meant Mobile apps Group remained on Google Play without even a probation period.

As a result of having two strikes from Google Play, the developers cleaned up their act from version 3.0 to 4.5, or February 26 to October 21, 2021. Then, on December 15, 2021, the developers released the code for the most current HiddenAds variant in version 4.6. Now on version 5.7, that malicious code remains to this date. That is a run of over ten months with malicious code on Google Play. Perhaps it’s time to say three strikes and you’re out to Mobile apps Group.

More than just adware

With all the evidence of malicious behaviors, one can only assume this is more than just adware that’s bypassing Google Play Protect detection. With a heavy dose of obfuscation and harmful phishing sites, this is clearly the malware we know as Trojan HiddenAds. Thanks to our Malwarebytes support team and our customers, we were able to track down this nasty malware. As always, you can remediate using our free scanner, Malwarebytes Mobile Security.

App information

Package name: com.bluetooth.autoconnect.anybtdevices

App name: Bluetooth Auto Connect

Developer: Mobile apps Group

MD5: C28A12CE5366960B34595DCE8BFB4D15

Google Play URL: https://play.google.com/store/apps/details?id=com.bluetooth.autoconnect.anybtdevices

Package name: com.driver.finder.bluetooth.wifi.usb

App Name: Driver: Bluetooth, Wi-Fi, USB

Developer: Mobile apps Group

MD5: 9BC55834B713B506E92B3787BE83F079

Google Play URL: https://play.google.com/store/apps/details?id=com.driver.finder.bluetooth.wifi.usb

 

Package name: com.bluetooth.share.app

App Name: Bluetooth App Sender

Developer: Mobile apps Group

MD5: F764F5A04859EC544685E30DE4BD3240

Google Play URL: https://play.google.com/store/apps/details?id=com.bluetooth.share.app

  

Package name: com.mobile.faster.transfer.smart.switch

App Name: Mobile transfer: smart switch

Developer: Mobile apps Group

MD5: AEA33292113A22F46579F5E953596491

Google Play URL: https://play.google.com/store/apps/details?id=com.mobile.faster.transfer.smart.switch


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

LinkedIn introduces new security features to combat fake accounts

LinkedIn knows it has a problem with bots and fake accounts, and has acknowledged this on more than one occasion. For years, it has been aware of spam, fake job offers, phishing, fraudulent investments, and (at times) malware, and has been trying to combat those issues.

In 2018, LinkedIn rolled out a way to automatically detect fake accounts. It also gave users an inside look into what’s going on behind the scenes: A dedicated team constantly analyzing abusive behavior, risk signals, and patterns of abuse; tools that are continuously improving; and the company investing in AI technologies aimed at detecting communities of fake accounts.

Now, LinkedIn is rolling out new security features to support its cause further. As Oscar Rodriguez said in his post on the LinkedIn blog:

“I am eager to share that as part of our ongoing commitment to keeping LinkedIn a trusted professional community, we are rolling out new features and systems to help you make more informed decisions about members that you are interacting with and enhancing our automated systems that keep inauthentic profiles and activity off our platform. Whether you are deciding to accept an invitation, learning more about a business opportunity, or exchanging contact information, we want you to be empowered to make decisions having more signals about the authenticity of accounts.”

What’s new?

The “About this profile” feature. This new section in a LinkedIn profile will contain information about when the profile was first created, when it was updated last, and indications of whether the account is associated with a verified phone number or work email address. This feature has already been rolled out.

(Source: LinkedIn)

Tech that analyzes profile pictures. As AI-based synthetic image generation technology—often called deepfake technology—has grown in sophistication, detection tech has become indispensable in helping filter genuine profile photos from AI creations. LinkedIn’s deep-learning tech looks for subtle image artifacts associated with AI-created images, which may be invisible to the naked eye. Accounts with positive detections will be removed before they can be used to reach out to members.

Flags that alert users to suspicious behavior. One known tactic of those with ill intent is encouraging their potential victim to continue the conversation away from the social platform where they first met, usually via email or IM. Scammers and fraudsters have employed this same tactic on LinkedIn. The platform now warns potential targets when the person they’re talking to suggests they move elsewhere.

(Source: LinkedIn)

“This sender appears to be trying to move the conversation off LinkedIn. We recommend you review these safety tips before proceeding,” reads the warning. Clicking “View message anyway” displays the sender’s message, which LinkedIn initially blocks unless the receiver chooses to view it.

Stronger together

While the tools that keep users safe and give them confidence in their decision-making regarding online safety continue to grow, the community’s involvement remains a powerful and effective deterrent against cybercriminals. LinkedIn encourages its users to be wary and report anything strange they see within the platform, such as:

  • People asking for money (in the form of cryptocurrency or gift cards) so you can claim a prize or other winnings
  • People expressing their romantic interest in you (this is generally frowned upon and is considered highly inappropriate on the platform)
  • A job posting that sounds too good to be true
  • A job posting that asks for an upfront fee for anything.

Keep in mind these red flags, too:

  • Profiles with abnormal profile images
  • Profiles with inconsistencies in their work history and education
  • Profiles with bad grammar. Question the credibility and legitimacy of such profiles
  • New profiles with no common connections, generic names, or few connections

Stay safe!



A week in security (October 24 – 30)

Last week on Malwarebytes Labs:

Stay safe!

Raspberry Robin worm used as ransomware prelude

Raspberry Robin, aka Worm.RaspberryRobin, started out as an annoying yet relatively low-profile threat. First spotted in September 2021, it was typically introduced into a network through infected removable drives, often USB devices.

Now the worm has been found to be the foothold for more serious threats like ransomware as laid out in this Microsoft Security blog. Microsoft warns that the worm has triggered payload alerts on devices of almost 1,000 organizations in the past 30 days.

Primary infection

Initially, the Raspberry Robin worm often appears as a shortcut .lnk file masquerading as a legitimate folder on the infected USB device. The name of the LNK file was recovery.lnk, which later changed to filenames associated with the brand of the USB device. Raspberry Robin uses both autorun to launch and social engineering to encourage users to click the LNK file.

Raspberry Robin’s LNK file points to cmd.exe to launch the Windows Installer service msiexec.exe and install a malicious payload hosted on compromised QNAP network attached storage (NAS) devices.
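For a rough idea of how such a shortcut could be triaged, here is a crude Python sketch that scans a removable drive for .lnk files whose raw bytes mention cmd.exe or msiexec. It is not a proper LNK parser and isn’t based on any specific Raspberry Robin sample; the strings and usage are illustrative assumptions:

import os
import sys

# Strings associated with the cmd.exe -> msiexec chain described above.
# LNK files may store them as ANSI or UTF-16LE, so check both encodings.
SUSPICIOUS = ("cmd.exe", "msiexec")

def scan_drive_for_suspicious_lnk(drive_root: str) -> None:
    """Crudely flag .lnk files that mention cmd.exe or msiexec."""
    for root, _dirs, files in os.walk(drive_root):
        for name in files:
            if not name.lower().endswith(".lnk"):
                continue
            path = os.path.join(root, name)
            try:
                data = open(path, "rb").read()
            except OSError:
                continue
            hits = [s for s in SUSPICIOUS
                    if s.encode() in data or s.encode("utf-16-le") in data]
            if hits:
                print(f"Suspicious shortcut: {path} (mentions {', '.join(hits)})")

if __name__ == "__main__":
    # Usage: python scan_lnk.py E:\
    scan_drive_for_suspicious_lnk(sys.argv[1])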

Infrastructure

A NAS device is a storage server connected to a computer network, storing data that can be accessed by a wide variety of devices, including Windows, macOS, and other systems. In practice, this usually means they are used as external hard drives that can be accessed over an intranet or the internet. There are several vulnerabilities in QNAP devices for which patches are available, but unfortunately many devices remain unpatched because their owners are unaware of the issues.

Backdoor

To be able to act as a backdoor, malware needs to be active or you need to be able to trigger it remotely. Raspberry Robin gains persistence by adding itself to the RunOnce key in the CurrentUser registry hive of the user who executed the initial malware.
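For defenders, that registry location is easy to review. The Python sketch below is a hypothetical, Windows-only triage helper that simply lists the current user’s RunOnce values; it doesn’t identify Raspberry Robin specifically, it just shows where to look:

# Windows-only: list RunOnce entries for the current user, the persistence
# location described above. Interpreting the entries is up to the analyst.
import winreg

RUNONCE = r"Software\Microsoft\Windows\CurrentVersion\RunOnce"

def list_runonce_entries() -> None:
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUNONCE)
    except FileNotFoundError:
        print("No RunOnce key present for this user.")
        return
    with key:
        _subkeys, value_count, _modified = winreg.QueryInfoKey(key)
        for index in range(value_count):
            name, data, _type = winreg.EnumValue(key, index)
            print(f"{name or '(default)'}: {data}")

if __name__ == "__main__":
    list_runonce_entries()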

By using command-and-control (C2) servers hosted on Tor nodes the Raspberry Robin implant can be used to distribute other malware.

Guests

As an established access provider in the current malware-as-a-service landscape, you can make money by selling access to affected networks to other malware operators, such as ransomware groups. Microsoft found that Raspberry Robin has been used to facilitate FakeUpdates (SocGholish), Fauppod, IcedID, Bumblebee, TrueBot, LockBit, and human-operated intrusions.

Fauppod is heavily obfuscated malware that is also used to spread FakeUpdates, and it writes Raspberry Robin to USB drives. TrueBot Trojans are used in targeted attacks for reconnaissance purposes.

An example of the human-operated intrusions was the deployment of Cobalt Strike to deliver the Clop ransomware.

Stop the worm

In Windows, the autorun of USB drives is disabled by default. However, many organizations have widely enabled it through legacy Group Policy changes, according to Microsoft. If you have enabled it, this is a policy worth rethinking.

Owners of QNAP devices should be aware of the fact that they are not only putting their own files at risk by not applying the patches, but they are providing malware authors with a free-to-use infrastructure to victimize others.



A Chrome fix for an in-the-wild exploit is out—Check your version

Google has announced an update for Chrome that fixes an in-the-wild exploit. Chrome Stable channel has been updated to 107.0.5304.87 for Mac and Linux, and 107.0.5304.87/.88 for Windows.

The vulnerability at hand is described as a type confusion issue in the V8 JavaScript engine.

Mitigation

If you’re a Chrome user on Windows, Mac, or Linux, you should update as soon as possible. Most of the time, the easiest way to update Chrome is to do nothing—it should update itself automatically, using the same method as outlined below but without your involvement. However, if something goes wrong—such as an extension blocking the update—or if you never close your browser, you can end up lagging behind on your updates.

So, it doesn’t hurt to check now and again. And now would be a good time, given the severity of the vulnerability this update fixes.

My preferred method is to have Chrome open the page chrome://settings/help, which you can also find by clicking Settings > About Chrome.

Updating Chrome

If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is relaunch the browser in order for the update to complete.

Chrome is up to date

After the update the version should be 107.0.5304.87 or later.

CVE-2022-3723

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services).

The vulnerability that prompted this out-of-band update is CVE-2022-3723, a type confusion issue in Chrome’s V8 JavaScript engine. A remote attacker could exploit this vulnerability to trigger data manipulation on the targeted system.

Type confusion is possible when a piece of code doesn’t verify the type of object that is passed to it. The program allocates or initializes an object using one type, but later accesses it using a type that is incompatible with the original. Details about the vulnerability will not be released before everyone has had a chance to update, but it seems that in this case manipulation of an unknown input can lead to privilege escalation.
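As a deliberately simplified illustration of the concept (memory-safe, and unrelated to the actual V8 bug), the Python sketch below writes eight bytes as a floating-point value and then reads the same memory back as if it were a 64-bit integer:

import ctypes

storage = ctypes.c_double(13.37)            # allocated and initialized as a double
confused = ctypes.cast(ctypes.pointer(storage),
                       ctypes.POINTER(ctypes.c_uint64))  # accessed as an integer

print(storage.value)                 # 13.37, read with the correct type
print(hex(confused.contents.value))  # the same bytes misread as a 64-bit integer

In a JavaScript engine, the equivalent confusion happens to engine-managed objects rather than a single number, which is why attackers can leverage it for far more than printing an odd value.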

The V8 engine is a very important component within Chrome that’s used to process JavaScript commands. A very similar vulnerability was found in March of 2022. This was also a type confusion issue in the V8 engine, which turned out to affect other Chromium based browsers as well. So keep an eye out for updates on any other Chromium based browser you may be using, such as Edge.

Dormant Colors browser hijackers could be used for more nefarious tasks, report says

Researchers from Guardio, a cybersecurity company specializing in web browser protection, recently revealed a campaign involving a trove of popular yet malicious extensions programmed to steal user searches, browsing data, and affiliation to thousands of targeted sites.

Nicknamed “Dormant Colors,” this campaign involves at least 30 variants of browser extensions for Chrome and Edge, once available in their respective stores (you can’t find them there now). The campaign was named as such because all the extensions offer browser color customization options, and their “maliciousness” lies dormant until triggered by their creator.

The non-exhaustive list of 30 browser extensions belonging to the Dormant Colors campaign, shown as extension names with their icons. (Source: Guardio)

According to researchers, the campaign starts with malvertising in the form of ads on web pages or redirects from offered video and download links. If a site visitor attempts to download what an ad offers or watch a video stream, they are redirected to a page informing them they need to download an extension first. Of course, an extension is never required. It’s part of the campaign to make users believe an extension download is needed.

Once visitors confirm the download, one of the 30 extensions above is installed on the browser. The extension then redirects users to various pages that surreptitiously side-load malicious scripts, which instruct it to begin hijacking user searches and inserting affiliate links.

When hijacking user searches, the extension redirects search query results to display results from sites affiliated with the extension developers. Doing this gives them money from ad impressions and the sale of search data.

Another way the extension developers wrongfully gain money is by redirecting users to the same page but with an affiliate link appended to the URL. For example, a user visits 365games.co.uk to buy video game merchandise. After the site’s default page finishes loading, the extension redirects the user to the same page but with an affiliate link included. The URL in the address bar would look something like this: 365games.co.uk/{affiliate-related string}.
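Mechanically, this is just a URL rewrite. The short Python sketch below illustrates the idea; the parameter name aff_id and its value are made up for this example, since the report doesn’t disclose the exact strings the extensions use:

from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def append_affiliate_tag(url: str, tag: dict) -> str:
    """Return the same URL with an extra affiliate-style query parameter."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update(tag)
    return urlunparse(parts._replace(query=urlencode(query)))

original = "https://365games.co.uk/some-product"
hijacked = append_affiliate_tag(original, {"aff_id": "example123"})
print(original)   # what the user actually requested
print(hijacked)   # what the extension silently reloads instead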

Users visiting Amazon, AliExpress, and porn sites should expect to see affiliate redirections when hit with this campaign. 

The average internet user will hardly notice this campaign’s quick and easy money-making schemes, which is worrying because it has the potential to go beyond search hijacking and URL sleight-of-hand. Guardio researchers say developers could program their extensions to direct users to phishing pages to steal credentials, especially those used to log in to work-related accounts. They could also write side-loaded code telling the extension to point users to a malware download site.

“This campaign is still up and running, shifting domains, generating new extensions, and re-inventing more color and style-changing functions you can for sure manage without,” said Guardio researchers in their full write-up. “At the end of the day, it’s not only affiliation fees being collected on your back, this is your privacy as well as your internet experience being compromised here, in ways that can target organizations by harvesting credentials and hijacking accounts and financial data.”

What is ransomware-as-a-service and how is it evolving?

Ransomware attacks are becoming more frequent and costlier—breaches caused by ransomware grew 41 percent in the last year, with the average cost of a destructive attack rising to $5.12 million. What’s more, a good chunk of the cybercriminals carrying out these attacks operate on a ransomware-as-a-service (RaaS) model.

RaaS is not much different, in theory, from the software-as-a-service (SaaS) business model, where cloud providers “rent out” their technology to you on a subscription basis—just swap out ‘cloud providers’ with ‘ransomware gangs’ and ‘technology’ with ‘ransomware’ (and the related crimes involved).

In this post, we’ll talk more about how RaaS works, why it poses a unique threat to businesses, and how small and medium-sized businesses (SMBs) can prepare for the next generation of RaaS attacks.

How does ransomware-as-a-service work?

How ransomware-as-a-service changed the game

Why ransomware-as-a-service attacks are so dangerous

Is ransomware here to stay? The evolution of RaaS attacks

How SMBs can protect themselves against next-gen RaaS

The perfect one-two combo for fighting RaaS


How does ransomware-as-a-service work?

Don’t get it twisted: RaaS gangs aren’t your run-of-the-mill hackers looking to score a few hundred bucks. We’re talking big, sophisticated businesses with up to a hundred employees—LockBit, BlackBasta, and AvosLocker are just a few of the RaaS gangs we cover in our monthly ransomware review.

“This is run as a business,” says Mark Stockley, Security Evangelist at Malwarebytes. “You’ve got developers, you’ve got managers, you’ve got maybe a couple of levels of people doing the negotiations, things like that. And these gangs have made hundreds of millions of dollars each year in the last few years.”

RaaS gangs like LockBit make money by selling “RaaS kits” and other services to groups called affiliates who actually launch the ransomware attacks. In other words, affiliates don’t need crazy technical skills or knowledge to carry out attacks. By working closely with “Initial Access Brokers” (IABs), some RaaS gangs can even offer affiliates direct access into a company’s network.

How ransomware-as-a-service changed the game

Let’s jump back to the year 2015. These were the “good ol’ days,” when ransomware attacks were automated and carried out on a much smaller scale.

Here’s how it went: somebody would send you an email with an attachment, you double-clicked on it, and ransomware ran on your machine. You’d be locked out of your machine and would have to pay about $300 in Bitcoin to get it unlocked. Attackers would send out loads of these emails, lots of people would get encrypted, and lots of people would pay them a few hundred bucks. That was the business model in a nutshell. 

But then ransomware gangs sniffed out a golden opportunity. 

Rather than attacking individual endpoints for chump change, they realized they could target organizations for big money. Gangs switched from automated campaigns to human-operated ones, where the attack is controlled by an operator. In human-operated attacks, attackers try hard to wedge themselves into a network so that they can move laterally throughout an organization. 

At the forefront of this evolution from automated ransomware to human-operated ransomware attacks are ransomware-as-a-service gangs—and their new business model seems to be paying off: in 2021, ransomware gangs made at least $350 million in ransom payments.

Why ransomware-as-a-service attacks are so dangerous

The fact that RaaS attacks are human-operated means that ransomware attacks are more targeted than they used to be—and targeted attacks are far more dangerous than un-targeted ones. 

In targeted attacks, attackers spend more time, resources, and effort to infiltrate a business’s network and steal information. Such attacks often take advantage of well-known security weaknesses to gain access, with attackers spending days or even months burrowing into your network.

The human-operated element of RaaS attacks also means that RaaS affiliates can control precisely when to launch an attack—including times when organizations are more vulnerable, such as on holidays or weekends.

“Famously, RaaS affiliates love long weekends,” Stockley said. “They want to run the ransomware when you’re not going to notice to give themselves however much time they need in order for the encryption to complete. So they like to do it at nighttime, they love to do it during holidays.”

“You’re dealing with a person,” Stockley continued. “It’s not about software running trying to figure everything out; it’s a person trying to figure everything out. And they’re trying to figure out what’s the best way to attack you.”

Is ransomware here to stay? The evolution of RaaS attacks

One of the biggest innovations in the RaaS space in recent years has been the use of double extortion schemes, where attackers steal data before encryption and threaten to leak it if the ransom isn’t paid. 

Companies have gotten more aware of ransomware and better prepared in terms of things like backups, for example. But if affiliates have already broken into your environment, they can simply use stolen data as extra leverage, leaking bits of it to get your attention, to speed up negotiations, or prove what kind of access they have.   

All of the RaaS gangs these days do double extortion, leaking data on dedicated leak websites on the dark web. Many RaaS programs even feature a suite of extortion support offerings, including leak site hosting. Not only is this trend growing, but there’s chatter about whether or not stand-alone data leaking is the next stage in evolution for RaaS. 

“There are now gangs that only do data leaking, and they don’t bother doing the encryption at all,” Stockley said. “Because it’s sufficiently successful. And you don’t have to worry about software, you don’t have to worry about software being detected, you don’t have to worry about it running.”

A LockBit data leak site. Source.

In other words, the evolution from “ransomware-focused” RaaS to “leaking-focused” RaaS means that businesses need to rethink the nature of the problem: It’s not about ransomware per se, it’s about an intruder on your network. The really dangerous thing is turning out to be the access, not the ransomware software itself. 

How SMBs can protect themselves against next-gen RaaS

Preparing for RaaS attacks isn’t any different from preparing for ransomware attacks in general, and advice isn’t going to vary all that much across different sized businesses or industries. Because next-gen RaaS is so focused on intrusion, however, SMBs have their own unique challenges in combating it. 

Monitoring a network 24/7 for signs of a RaaS intrusion is tough work, period, let alone for organizations with shoe-string budgets and barely any security staff. Consider the fact that, when a threat actor breaches a target network, they don’t attack right away. The median number of days between system compromise and detection is 21 days.

By that time, it’s often too late. Data has been harvested or ransomware has been deployed. In fact, 23 percent of intrusions lead to ransomware, 29 percent to data theft, and 30 percent to exploit activity—when adversaries use vulnerabilities to initiate further intrusions.

Even with tools such as EDR, SIEM, and XDR, sifting through alerts and recognizing Indicators of Compromise (IOCs) is the work of seasoned cyber threat hunters—talent that SMBs just can’t afford. That’s why investing in Managed Detection and Response (MDR) is hugely beneficial for SMBs looking to get a leg-up against RaaS attacks. 

“Obviously, the most cost effective thing is to not let people in in the first place. And this is why things like patching, two-factor authentication, and multi-vector Endpoint Protection (EP) are so important,” Stockley said. “But at the point where they’ve broken in, then you want to detect them before they do anything bad. That’s where MDR comes in.”

The perfect one-two combo for fighting RaaS 

Human-operated, targeted, and easy to execute, RaaS attacks are a dangerous evolution in the history of ransomware. 

Double-extortion tactics, where attackers threaten to leak stolen data to the dark web, are another important evolutionary stage of RaaS campaigns today—to the point where ransomware itself might become obsolete in the future. As a result, SMBs should focus their anti-RaaS efforts on intruder detection with MDR, in addition to implementing ransomware prevention and resilience best practices.

More resources

Get the eBook: Is MDR right for my business?

Top 5 ransomware detection techniques: Pros and cons of each

Cyber threat hunting for SMBs: How MDR can help

A threat hunter talks about what he’s learned in his 16+ year cybersecurity career

Fake Proof-of-Concepts used to lure security professionals

Researchers from Leiden University published a paper detailing how cybercriminals are using fake Proof-of-Concepts (PoCs) to install malware on researchers’ systems. The researchers found these fake PoCs on a platform where security professionals would usually expect to find them—the public code repository GitHub.

Use of PoCs

There is a big difference between knowing that a vulnerability exists and having a PoC available. If someone else has already put in the work of figuring out how a new vulnerability can be weaponized, a PoC allows you to put it to the test. That work is shared to help defenders, not to make the lives of cybercriminals easier.

Security professionals are interested in PoCs because it gives them a better understanding about vulnerabilities. PoCs also offer the opportunity to see if certain mitigation techniques or updates solve the problem. They can also be used for red teaming to demonstrate the possible impact of successful attacks.

Investigation

The researchers investigated PoCs shared on GitHub for known vulnerabilities discovered between 2017 and 2021. They found that 4,893 of the 47,313 repositories they downloaded and checked qualified as malicious. The qualification was based on calls to known malicious IP addresses, encoded malicious code, or the presence of Trojanized binaries. That is more than 10 percent of the samples the researchers checked.

Other sources

More reputable sources for PoCs, like Exploit-DB, try to validate the effectiveness and legitimacy of PoCs. In contrast, public code repositories like GitHub do not have such an exploit vetting process. But if a researcher is looking for a PoC for a particular vulnerability and can’t find it on a more reputable source, they will have to resort to public platforms.

Indicators

Since it is an impossible task to do a detailed analysis of many thousands of PoCs, the researchers had to decide on certain indicators to establish whether a PoC was in fact malicious. That is not an easy task, since the behavior of a PoC exploiting a vulnerability might itself be detected as malicious by most anti-malware solutions. So the researchers had to identify properties that indicate other malicious goals, unrelated to the original PoC’s purpose.

They did this by looking for the following indicators (a simplified sketch of these checks follows the list):

  • IP addresses: The researchers extracted IP addresses and removed all private IP addresses. The results were compared with VirusTotal, AbuseIPDB, and other publicly available blocklists.
  • Binaries: The researchers focused on EXE files, which run on Windows systems, since most malware attacks are conducted against Windows users. After extracting them, the researchers checked their hashes in VirusTotal and, from those detected as malicious, dismissed the ones listed as an exploit of the target vulnerability.
  • Obfuscated payloads: By performing hexadecimal and base64 analysis, the researchers were able to identify some extra malicious PoCs.
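The sketch below is a simplified, local-only illustration of those three checks (written for this article, not the researchers’ actual tooling). It only collects candidates, leaving the comparison against VirusTotal, AbuseIPDB, or other blocklists to the analyst:

import hashlib
import ipaddress
import os
import re
import sys

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
B64_RE = re.compile(r"[A-Za-z0-9+/]{60,}={0,2}")  # long base64-looking blobs

def scan_repo(repo_dir: str) -> None:
    public_ips, exe_hashes, blob_count = set(), {}, 0
    for root, _dirs, files in os.walk(repo_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                data = open(path, "rb").read()
            except OSError:
                continue
            if name.lower().endswith(".exe"):
                # Hashes to look up manually on VirusTotal.
                exe_hashes[path] = hashlib.sha256(data).hexdigest()
                continue
            text = data.decode(errors="replace")
            for candidate in IP_RE.findall(text):
                try:
                    ip = ipaddress.ip_address(candidate)
                except ValueError:
                    continue
                if ip.is_global:  # drop private/reserved addresses
                    public_ips.add(candidate)
            blob_count += len(B64_RE.findall(text))

    print("Public IPs to check against blocklists:", sorted(public_ips))
    print("EXE hashes to look up on VirusTotal:", exe_hashes)
    print("Long base64-like blobs found:", blob_count)

if __name__ == "__main__":
    # Usage: python scan_poc.py <path-to-cloned-repo>
    scan_repo(sys.argv[1])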

For a full explanation of their methodology, we encourage you to read their full paper.

Conclusions

Out of 47,313 GitHub repositories with PoCs, the researchers detected 4,893 malicious repositories (i.e., 10.3 percent). Inside some of these malicious PoCs they found instructions to open backdoors or plant malware on the system running them. This means that these PoCs are targeting the security community, and by extension every customer of a security company that uses these PoCs from GitHub. The results also show that malicious repositories are on average more similar to each other than non-malicious ones, which may lead to improved methods for further research.

Maintenance Mode aims to keep phone data private during repairs

One of the biggest data-related headaches you’ll face with a mobile device is what to do in the event of a repair. When you have to send your phone in for a fix, what happens to your data? In many cases, the repair technicians will simply scrub the phone by default unless you ask them not to. If they don’t wipe it, though, how do you keep everything safe? You have no guarantee that the technician won’t sneak a peek at files, folders, passwords, logins, your browsing history…you name it, it’s on there.

A timeless problem, and one often met with a resigned sigh, a backup, and a pre-repair phone wipe “just in case”. It’s a reasonable concern. Even if it is very unlikely that the person doing the fixing is remotely interested in your day-to-day life, you’re still placing your personal data and private information in the hands of a complete stranger.

New solutions are being applied to this incredibly common, yet oddly invisible tech problem in the form of Samsung’s new “maintenance mode.”

From repair to maintenance

You may have heard of this new mode by another name. Back in July when word first spread, it was known as repair mode. Anyone digging into the Battery and Device Care options would see a new option to make all of your personal info, apps, and files invisible to the tech looking at the phone. At the time, this option was only available on specific models, and only in South Korea. It was assumed, however, that the new option would roll out to other regions and devices.

Sure enough, this has proven to be the case, and we now have a slow global rollout of this new privacy-retaining addition. We also have a name change, in the form of Maintenance Mode, and some more details as to how it operates.

How does Maintenance Mode work?

When activated, “Maintenance Mode” essentially creates a temporary, disposable user account on the phone. Access to everything on there previously is restricted for as long as someone else has hold of your device. From the new mode’s splash screen:

“In maintenance mode, your personal data including pictures, messages, and accounts, can’t be accessed and only preinstalled apps can be used. You’ll need to unlock your phone to turn off maintenance mode. When you do, everything will go back to the way it was when maintenance mode was first turned on. Changes made while maintenance mode is on, such as downloaded data or settings changes, aren’t saved. Back up your data.”

This last line is good advice. You should always back up your data anyway before handing over your phone, just in case it can’t be fixed. It’s also likely that some people may mistake Maintenance Mode for an additional way of backing up data, as opposed to “just” shielding it from prying eyes, so this messaging is entirely worth it.

To use or not to use

Regardless of new tech features for your device, you should always weigh the pros and cons of handing something over with personal details on it, versus just backing up and wiping. New and cool privacy features tend to take a bit of a tech grilling as more people see what they can and can’t do with them. If you’re worried about someone figuring out a way to exploit maintenance mode, for example, you may want to just wait a while and see if anything untoward happens first. Again, while this is probably a minor risk for most people, awful people do awful things with your private data if they feel like they can get away with it.

For everyone else, this might be a new phone addition which goes some way to easing a data deletion headache. It’s definitely no fun to reinstall and reauthorize a whole mobile ecosystem when you get your device back. Perhaps this tips the fatigue odds a little bit back in your favor.

Medibank customers’ personal data compromised by cyber attack

Australian health care insurance company Medibank confirmed that the threat actor behind a cyberattack on the company had access to the data of at least 4 million customers.

Although Medibank at first said that there was “no evidence that customer data has been accessed,” a week later its investigation showed that the threat actor had access to all Medibank customers’ personal data and significant amounts of health claims data.

Stolen data

The cybercrime investigation shows that the criminal had access to:

  • All ahm customers’ personal data and significant amounts of health claims data
  • All international student customers’ personal data and significant amounts of health claims data
  • All Medibank customers’ personal data and significant amounts of health claims data

This does not necessarily mean that all of this data has been stolen, but Medibank has been contacted by the threat actor, who claims to have stolen 200GB of data. The threat actor provided a sample of 100 policy records, which are believed to come from the ahm and international student systems.

The provided data sample includes first names and surnames, addresses, dates of birth, Medicare numbers, policy numbers, phone numbers and some claims data. It also includes the location of where a customer received medical services, and codes relating to their diagnosis and procedures.

The claim that the attackers have stolen other information, including data related to credit card security, has not yet been verified.

Not just current customers

Medibank has promised it will commence making direct contact with affected customers to inform them of this latest development, and to provide support and guidance on what to do next. There may be some surprises, because not all affected people are current customers. Australian law requires Medibank to hold onto past customers’ data, which is why former clients could be caught up in this breach. Relevant laws require the company to keep the health information of adults for at least seven years, and the health information of individuals younger than 18 until they are at least 25 years old.

What to do?

Medibank and ahm customers can contact Medibank by phone (for ahm customers 13 42 46 and for Medibank customers 13 23 31) or visit the information page on the website for any updates.

Until the investigation has verified the full extent of the stolen data, it is hard to establish whether your data has been stolen. So far it has been confirmed that international students have been affected, of which there are many, since private health insurance is a requirement for starting a course of study in Australia.

Medibank is providing a comprehensive support package for customers who have had their data stolen, which includes:

  • Financial support for customers who are in a uniquely vulnerable position as a result of this crime. They will be supported on an individual basis.
  • Free identity monitoring services for customers who have had their primary ID compromised
  • Reimbursement of fees for re-issue of identity documents that have been fully compromised in this crime

And they are offering all customers access to:

  • Specialist identity protection advice and resources from IDCARE
  • Medibank’s mental health and wellbeing support line

This and any new information can be found on Medibank’s webpage about the cybersecurity incident.

As always, when personal data have been stolen it is advisable to deploy some extra vigilance when it comes to phishing attempts that could very well use some of the stolen information to gain credibility.