IT NEWS

Here’s what data the FBI can get from WhatsApp, iMessage, Signal, Telegram, and more

Not every secure messaging app is as safe as it would like us to think. And some are safer than others.

A recently disclosed FBI training document shows how much access US law enforcement can gain to the content of encrypted messages on secure messaging services, and what it can learn about your usage of the apps.

The infographic shows details about iMessage, Line, Signal, Telegram, Threema, Viber, WeChat, WhatsApp, and Wickr. All of them are messaging apps that promise end-to-end encryption for their users. And while the FBI document does not dispute those promises, it does reveal what type of information law enforcement is able to unearth from each of the listed services.

Note: A pen register is an electronic tool that can be used to capture data regarding all telephone numbers that are dialed from a specific phone line. So if you see it mentioned below, it refers to the FBI’s ability to find out who you have been communicating with.

iMessage

iMessage is Apple’s instant messaging service. It works across Macs, iPhones, and iPads. Using it on Android is hard because Apple uses a special end-to-end encryption system in iMessage that secures the messages from the device they’re sent on, through Apple’s servers, to the device receiving them. Because the messages are encrypted, the iMessage network is only usable by devices that know how to decrypt the messages. Here’s what the document says it can access for iMessage:

  • Message content limited.
  • Subpoena: Can render basic subscriber information.
  • 18 USC §2703(d): Can render 25 days of iMessage lookups to and from a target number.
  • Pen Register: No capability.
  • Search Warrant: Can render backups of a target device; if target uses iCloud backup, the encryption keys should also be provided with content return. Can also acquire iMessages from iCloud returns if target has enabled Messages in iCloud.

Line

Line is a freeware app for instant communications on electronic devices such as smartphones, tablets, and personal computers. In July 2016, Line Corporation turned on end-to-end encryption by default for all Line users, after it had earlier been available as an opt-in feature since October 2015. The document notes on Line:

  • Message content limited.
  • Suspect’s and/or victim’s registered information (profile image, display name, email address, phone number, LINE ID, date of registration, etc.)
  • Information on usage.
  • Maximum of seven days’ worth of specified users’ text chats (Only when end-to-end encryption has not been elected and applied and only when receiving an effective warrant; however, video, picture, files, location, phone call audio and other such data will not be disclosed).

Signal

Signal is a cross-platform centralized encrypted instant messaging service. Users can send one-to-one and group messages, which can include files, voice notes, images and videos. Signal uses standard cellular telephone numbers as identifiers and secures all communications to other Signal users with end-to-end encryption. The apps include mechanisms by which users can independently verify the identity of their contacts and the integrity of the data channel. The document notes about Signal:

  • No message content.
  • Date and time a user registered.
  • Last date of a user’s connectivity to the service.

This seems to be consistent with Signal’s claims.

Telegram

Telegram is a freeware, cross-platform, cloud-based instant messaging (IM) system. The service also provides end-to-end encrypted video calling, VoIP, file sharing and several other features. There are also two official Telegram web twin apps—WebK and WebZ—and numerous unofficial clients that make use of Telegram’s protocol. The FBI document says about Telegram:

  • No message content.
  • No contact information provided for law enforcement to pursue a court order. As per Telegram’s privacy statement, for confirmed terrorist investigations, Telegram may disclose IP and phone number to relevant authorities.

Threema

Threema is an end-to-end encrypted mobile messaging app. Unlike other apps, it doesn’t require you to enter an email address or phone number to create an account. A user’s contacts and messages are stored locally, on each user’s device, instead of on the server. Likewise, your public keys reside on devices instead of the central servers. Threema uses the open-source library NaCl for encryption. The FBI document says it can access:

  • No message content.
  • Hash of phone number and email address, if provided by user.
  • Push Token, if push service is used.
  • Public Key
  • Date (no time) of Threema ID creation.
  • Date (no time) of last login.

Viber

Viber is a cross-platform messaging app that lets you send text messages, and make phone and video calls. Viber’s core features are secured with end-to-end encryption: calls, one-on-one messages, group messages, media sharing and secondary devices. This means that the encryption keys are stored only on the clients themselves and no one, not even Viber itself, has access to them. The FBI notes:

  • No message content.
  • Provides account (i.e. phone number) registration data and IP address at time of creation.
  • Message history: time, date, source number, and destination number.

WeChat

WeChat is a Chinese multi-purpose instant messaging, social media and mobile payment app. User activity on WeChat has been known to be analyzed, tracked and shared with Chinese authorities upon request as part of the mass surveillance network in China. WeChat uses symmetric AES encryption but does not use end-to-end encryption to encrypt users’ messages. The FBI has less access than the Chinese authorities and can access:

  • No message content.
  • Accepts account preservation letters and subpoenas, but cannot provide records for accounts created in China.
  • For non-China accounts, they can provide basic information (name, phone number, email, IP address), which is retained for as long as the account is active.

WhatsApp

WhatsApp is an American freeware, cross-platform centralized instant messaging and VoIP service owned by Meta Platforms. It allows users to send text messages and voice messages, make voice and video calls, and share images, documents, user locations, and other content. WhatsApp’s end-to-end encryption is used when you message another person using WhatsApp Messenger. The FBI notes:

  • Message content limited.
  • Subpoena: Can render basic subscriber records.
  • Court order: Subpoena return as well as information like blocked users.
  • Search warrant: Provides address book contacts and WhatsApp users who have the target in their address book contacts.
  • Pen register: Sent every 15 minutes, provides source and destination for each message.
  • If the target is using an iPhone and iCloud backups are enabled, iCloud returns may contain WhatsApp data, including message content.

Wickr

Wickr has developed several secure messaging apps based on different customer needs: Wickr Me, Wickr Pro, Wickr RAM, and Wickr Enterprise. The Wickr instant messaging apps allow users to exchange end-to-end encrypted and content-expiring messages, including photos, videos, and file attachments. Wickr was founded in 2012 by a group of security experts and privacy advocates but was acquired by Amazon Web Services. The FBI notes:

  • No message content.
  • Date and time account created.
  • Type of device(s) app installed on.
  • Date of last use.
  • Number of messages.
  • Number of external IDs (email addresses and phone numbers) connected to the account, but not the plaintext external IDs themselves.
  • Avatar image.
  • Limited records of recent changes to account settings, such as adding or suspending a device (does not include message content or routing and delivery information).
  • Wickr version number.

Conclusion

If there is one thing clear from the information in this document, it’s that most, if not all, of your messages are safe from prying eyes in these apps, unless you’re using WeChat in China. Based on the descriptions, you can check which apps are available on your favorite platform and which of the bullet points matter to you, and decide which app is a good choice for you.

The safest approach, however, is to make sure the FBI doesn’t consider you a person of interest. In those cases, even using a special encrypted device can pose some risks.

Stay safe, everyone!

The post Here’s what data the FBI can get from WhatsApp, iMessage, Signal, Telegram, and more appeared first on Malwarebytes Labs.

Capcom Arcade Stadium’s record player numbers blamed on card mining

Some of my favourite retro video games are making waves on Steam, but not in the way you might think. Classics such as Strider, Ghosts ’n Goblins, and more are all available as content for Capcom Arcade Stadium. This is an emulator which lets you play 31 arcade games from the 80s and 90s. The games themselves are paid downloadable content, but the main emulator download itself is free. It also comes with one free game as a taster of the full edition.

It didn’t have a great reception at launch, because people didn’t like titles being sold in bundles only. As such, it was something of a surprise to see it riding high at the top end of the Steam activity charts in the last few days.

Sure, the games can now be bought individually. But would that really equate to an all-time concurrent tally of 481,088 players? Did people really wake up this week and think “What we need in our lives is 3 different versions of Street Fighter 2”?

The numbers game

Make no mistake, these are some of the biggest numbers you can achieve on Steam, and it typically takes a massive AAA+ title to reach them. For example, right now the three top played games on Steam are:

  1. Counter Strike: Global Offensive with 507,995 players
  2. Dota 2 with 325,679 players
  3. PUBG: Battlegrounds with 150,498 players

These are all huge online games, played against other people. Yet somehow we have the archaic arcade emulator, with its one free game by default, storming into the top three.

These numbers are so vast that Capcom Arcade Stadium has managed to hit 8th place in the top records for most simultaneous players. What could have possibly caused this? The faithful translation of arcade controls to gamepads? The ability to rewind the game should you make a mistake? Customising the individual game’s arcade cabinet before loading up a title?

Nope, it’s bots.

How did bots cause the great player count inflation of 2021?

Generally when we talk about bots in gaming, we mean hacked accounts or PCs performing certain tasks. It could be a DDoS attack, or sending out phishing messages inside game chat, or some other nefarious activity.

In this case, the “bot” is something a little bit different. It’s not something caused by what happens inside the game itself. Rather, it’s a layer of virtual economy and digital goods driving what happens to the player count.

Before we get to the nitty gritty, it’s time to explain the ins and outs of Steam card trading.

Steam card trading

Sometimes folks get confused on this, so for clarity, there are two types of “Steam card”. The first is an actual, physical gift card you can buy in stores. These cards have monetary amounts assigned to them, and they’re a way to preload your Steam account with credit. You then use that credit to purchase games from the store. You can also buy “digital” versions of these cards which perform the same function.

The other type of card, the one we’re focusing on, is the Steam trading card. These are items which are tied to certain games, but don’t exist within them. They’re essentially cool-looking virtual cards with characters from the game on them. The more you play a game, the more likely you are to be given a free card drop. When you collect all of the cards for a game, you can create a badge for your Steam profile. At the same time, you’ll be given other community-centric items like emoticons, the possibility of discount coupons for other games, or even the option to bump up your Steam level (another profile feature).

The system is designed so you can’t just grab all of the cards by playing. There’s a limit on how many you’ll receive and then you need to get the rest by trading with friends, or buying from the Steam marketplace.

This is a very detailed system with a lot of depth to it. Steam trading is big business, and often one of the focal points for scams, phishes, and malware antics. However, that’s not the case here.

Rather, it appears to be users trying to game the system for their own ends. For once, nobody is compromising accounts and running off with a sack full of stolen logins.

Still, this raises the question: what is happening here?

The wonderful world of card mining

It’s not just Bitcoin hogging all the space in the mines these days. Steam cards can also be mined, and there’s a surprising number of tools available to do it. One of the most popular is something called ArchiSteamFarm. This is a third party tool you can log into with your Steam credentials, and it’ll tell you what can/can’t be farmed. If there are card drops available, you simply tell it which games of yours to “idle” on and we’re off to the card mining races. You don’t have to download the game in order to idle, which makes it super convenient for people wanting cards without gigabytes of downloads and wasted hard drive space.

This is where things get really interesting.

Steam cards usually only drop for paid titles. If you don’t buy the game, you can’t get cards. In this case, the base Capcom Arcade Stadium game is a free download with one free title included. This isn’t (and shouldn’t be!) enough to have cards start dropping.

However, something seems to have gone wrong. All of a sudden, people found they could obtain trading cards despite only having the free game. This meant a huge surge in botting activity to grab cards before someone at Valve—the company behind Steam—fixed it.

As a result, a massive number of card miners fired up their tools (whether ArchiSteamFarm or something else entirely), and idled their way to sweet card victory. As above, there are limitations on how many cards you can farm. Once you hit the limit, that’s it – no more mining on that game, ever. You have to trade or buy the rest. So this is, essentially, people just wanting to get in on the ground floor of red hot card trading action.

Watching Steam achievement totals drop in real time

As this Ars Technica article notes, you can observe clues regarding the automated action taking place. One way to do this is by checking out Steam achievement numbers. A couple of days ago, around 44.6% of people had gained the achievement for loading up a game for the first time. Now, the number sits at just 7.9%, even lower than the figure quoted in the article. The only way this number makes sense, given the massive user numbers, is if huge numbers of new game owners are using tools to “idle” while prospecting for card drops.
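
As a rough back-of-the-envelope illustration of why that drop points to idling rather than genuine new players, here’s a small calculation. It assumes (our assumption, not Ars Technica’s or Valve’s) that the achievement percentage is taken over all accounts owning the game, and that the number of people who actually launched a title stayed roughly flat:

```python
# Share of owners with the "launched a game for the first time" achievement,
# before and after the surge in ownership.
before, after = 0.446, 0.079

# If the count of genuine players stayed roughly constant, the owner base
# must have grown by roughly this factor.
growth = before / after
print(f"Owner base grew roughly {growth:.1f}x")                     # ~5.6x
print(f"Owners who never launched a single game: {1 - after:.0%}")  # ~92%
```

In other words, the only way to square those percentages is a flood of new “owners” who claimed the free game and never actually played it.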

The leaky card pipeline has apparently been fixed, so no amount of idling will produce any more cards. This happens to games occasionally, most notably when an error caused card drops for Life is Strange 2. What usually happens after an incident like this is the market is flooded and card value plummets, so it’s probably a fraught time on the old trading card stock exchange or something.

When retro revivals are no more…

Unfortunately, my dreams of a Strider revival off the back of massive player numbers and a sudden boom in retro gaming now seem unlikely. On the bright side, the peculiar rise in player numbers didn’t involve people up to no good with malware or phishing.

While Valve probably won’t be too pleased by the inadvertent rush on cards, that is at least one small mercy we can be thankful for.

The post Capcom Arcade Stadium’s record player numbers blamed on card mining appeared first on Malwarebytes Labs.

Massive faceprint scraping company Clearview AI hauled over the coals

Life must be hard for companies that try to make a living by invading people’s privacy. You almost feel sorry for them. Except I don’t.

The UK’s Information Commissioner’s Office (ICO)—an independent body set up to uphold information rights—has announced its provisional intent to impose a potential fine of just over £17 million (roughly US$23 million) on Clearview AI.

In addition, the ICO has issued a provisional notice to stop further processing of the personal data of people in the UK and to delete what Clearview AI has, following alleged serious breaches of the UK’s data protection laws.

What is Clearview AI?

Clearview AI was founded in 2017, and started to make waves when it turned out to have created a groundbreaking facial recognition app. You could take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared.

According to its own website, Clearview AI provides a “revolutionary intelligence platform”, powered by facial recognition technology. The platform includes a facial network of 10+ billion facial images scraped from the public internet, including news media, mugshot websites, public social media, and other open sources.

Yes, scraped from social media, which means that if you’re on Facebook, Twitter, Instagram or similar, then your face may well be in the database.

Clearview AI says it uses its faceprint database to help law enforcement fight crimes. Unfortunately it’s not just law enforcement. Journalists uncovered that Clearview AI also licensed the app to at least a handful of private companies for security purposes.

Clearview AI ran a free trial with several law enforcement agencies in the UK, but these trials have since been terminated, so there seems to be little reason for Clearview to hold on to the data.

And worried citizens who wish to have their data removed (something companies must do upon request under the GDPR) are often required to provide the company with even more data, including photographs, just to be considered for removal.

It’s not just the UK that’s worried. Earlier this month, the Office of the Australian Information Commissioner (OAIC) ordered Clearview AI to stop collecting photos taken in Australia and remove the ones already in its collection.

Offenses

The ICO says that the images in Clearview AI’s database are likely to include the data of a substantial number of people from the UK and these may have been gathered without people’s knowledge from publicly available information online.

The ICO found that Clearview AI has failed to comply with UK data protection laws in several ways, including:

  • Failing to process the information of people in the UK in a way they are likely to expect or that is fair
  • Failing to have a process in place to stop the data being retained indefinitely
  • Failing to have a lawful reason for collecting the information
  • Failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR)
  • Failing to inform people in the UK about what is happening to their data
  • And, as mentioned earlier, asking for additional personal information, including photos, which may have acted as a disincentive to individuals who wish to object to their data being processed

Clearview AI Inc now has the opportunity to make representations in respect of these alleged breaches set out in the Commissioner’s Notice of Intent and Preliminary Enforcement Notice. These representations will then be considered and a final decision will be made.

As a result, the proposed fine and preliminary enforcement notice may be subject to change, or there may be no further formal action at all.

There is some hope for Clearview AI if you look at past fines imposed by the ICO.

Marriott was initially expected to receive a fine of 110 million euros after a data breach that happened in 2014 but wasn’t disclosed until 2018; in the end, it had to pay “only” 20 million euros. We can expect to hear a final decision against Clearview AI by mid-2022.

Facial recognition

A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces, typically employed to identify and/or authenticate users.
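
In very broad strokes, such systems typically convert each face into a numeric “embedding” and then compare embeddings. The sketch below is a generic illustration of that matching step only; the 128-dimensional vectors, names, and 0.6 threshold are made-up stand-ins, not details of Clearview AI’s platform:

```python
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    """Compare a probe face embedding against a gallery of known embeddings
    using cosine similarity, and return the best match above a threshold."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cosine(probe, emb) for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Toy vectors standing in for embeddings produced by a face-recognition model.
rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + rng.normal(scale=0.05, size=128)

print(best_match(probe, gallery))  # ("person_a", ~0.99)
```

In Clearview AI’s case, the “gallery” being searched is its scraped database of more than 10 billion images.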

Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mugshots and driver’s license photos.

Gathering images from the public internet obviously makes for a much larger dataset, but that is not what the images were posted for.

It’s because of the privacy implications that some tech giants have backed away from the technology, or halted their development of it. Clearview AI is clearly not one of them: it is neither a tech giant nor a company that cares about privacy.

The post Massive faceprint scraping company Clearview AI hauled over the coals appeared first on Malwarebytes Labs.

CronRAT targets Linux servers with e-commerce attacks

There’s an interesting find over at the Sansec blog, wrapping time and date manipulation up with a very smart RAT attack.

The malware, named CronRAT, isn’t an e-commerce attack compromising payment terminals in physical stores. Rather, it looks to swipe payment details by going after vulnerable web stores and dropping payment skimmers on Linux servers. It’s your classic Magecart attack with a stealthy twist.

This method bypasses the protections that people using the websites arm themselves with, rigging the game from the start. By the time you get onto the website, everything may be fine at your end, but the stream further upriver has already been polluted. It achieves this thanks to the Linux Cron Job system, which we’ll come back to a little later.

First of all, here’s a brief rundown on what Magecart is, and the difference between client-side and server-side attacks.

What is Magecart?

It’s the collective name used for multiple groups who partake in web skimming. These attacks rely on outdated CMSes or plugin zero-days. They may go after small businesses running a particular e-commerce platform. It’s possible they use services like bulletproof hosting to frustrate researchers and law enforcement. Web shells are a popular tactic. There are even impersonators out there, just to make things even more confusing.

Client-side versus server-side attacks

Client-side is where the people who buy things from websites hang out. These are the places where operations such as Magecart may lurk. It could be bogus JavaScript loading in from untrusted domains, or perhaps some other form of rogue code. You can ward off threats such as these by using browser plugins like NoScript. There’s an element of control over these factors, in terms of how you try and secure your browser.

Server-side is an attack on the merchants. Your security processes and tools may be great, but when someone is directly corrupting the site under the hood, you may be fighting a losing battle. While your typical web shopper’s first run-in with Magecart would be the previously mentioned rogue JavaScript or other code, this attack means browser-based fixes may not help.

With those out of the way, we’ll loop back to Cron and Cron Jobs.

What is Cron?

Cron is a way for people running a Linux system to schedule tasks. Those tasks will run at a specified time/date in the future, and are known as Cron Jobs. Where things get interesting is that you can enter any date you like, even ones which don’t exist. As long as the system accepts your input, it’ll take it on board and file it away in the scheduling system.

CronRAT adds various tasks to the cron table, with a date specification that’ll generate run time errors when triggered. What the malware authors have done is take advantage of the “any date can be used” functionality, and assigned them to February 31st. Of course, this is a date which doesn’t actually exist. As a result, the errors will never happen.
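
To see why that’s such a convenient hiding place, here’s a minimal sketch (our own illustration, not CronRAT’s code) of the difference between “each field is within range” and “this date can ever occur”. A cron-style entry such as “52 23 31 2 3” passes the per-field check, but February 31st never arrives, so the task never runs:

```python
import calendar

def fields_in_range(minute, hour, dom, month, dow):
    """Per-field validation in the spirit of cron: each value only has to
    fall inside its allowed numeric range, not form a real calendar date."""
    return (0 <= minute <= 59 and 0 <= hour <= 23 and
            1 <= dom <= 31 and 1 <= month <= 12 and 0 <= dow <= 7)

def date_ever_occurs(dom, month):
    """A day-of-month/month pair only ever fires if some year has that date."""
    # Check a non-leap and a leap year; February tops out at 29 days.
    return any(dom <= calendar.monthrange(year, month)[1] for year in (2023, 2024))

entry = {"minute": 52, "hour": 23, "dom": 31, "month": 2, "dow": 3}  # "52 23 31 2 3"
print(fields_in_range(**entry))                        # True: the entry is accepted
print(date_ever_occurs(entry["dom"], entry["month"]))  # False: February 31st never comes
```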

As Sansec puts it:

…the actual malware code is hidden in the task names and is constructed using several layers of compression and base64 decoding.

The payload is a “sophisticated bash program that features self-destruction, timing modulation and a custom binary protocol to communicate with a foreign control server.”
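
To make the “several layers of compression and base64 decoding” idea concrete, here’s a small, self-contained sketch of that kind of layered encoding and decoding. It uses Python’s zlib and base64 purely as stand-ins to show the principle; it is not CronRAT’s actual scheme or payload:

```python
import base64
import zlib

def wrap(cmd: str, layers: int = 2) -> str:
    """Hide a command under alternating compress + base64 layers, the way a
    loader might stash code inside an innocuous-looking string."""
    data = cmd.encode()
    for _ in range(layers):
        data = base64.b64encode(zlib.compress(data))
    return data.decode()

def unwrap(blob: str, layers: int = 2) -> str:
    """Peel the layers back off, as a dropper would before running the result."""
    data = blob.encode()
    for _ in range(layers):
        data = zlib.decompress(base64.b64decode(data))
    return data.decode()

hidden = wrap("echo hello from a task name")
print(hidden)          # looks like harmless base64 noise
print(unwrap(hidden))  # -> "echo hello from a task name"
```

The practical upshot is that nothing in the crontab looks like an obvious script; the real logic only appears once the loader has peeled those layers away in memory.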

This is definitely one way for Magecart to make waves over the Black Friday period and further into the Christmas season.

The problem of digital skimming

Here’s some thoughts from Jerome Segura, our Senior Director of Threat Intelligence:

We’ve known for a long time that there are two different ecosystems when it comes to website security: server-side and client-side. While most security companies focus on the latter, the former is probably the more interesting and perhaps less documented one as it requires access to backend systems. This is an example of a threat that is well crafted and meant to evade detection by default browser-side, but also in some aspects server-side due to its clever obfuscation techniques.

What that means from a digital skimming standpoint is that you are always accepting a level of risk by shopping online and placing trust in the merchant’s ability to keep their systems safe. You should be aware of any subtle changes in payment forms and other possible giveaways that a website is not up to par. Without getting too technical, certain things like outdated copyright information or broken HTML elements may be an indication that the store is not keeping their site up to date.

An attacker will first compromise online shops that are vulnerable to attacks, so it makes sense to stay clear of those that are not following best practices.

Safety first

There are lots of things you can do out there in the real world to avoid ATM skimmers and related threats. You can also be proactive in the realm of web-based skimmers targeting the sites you make payments on. Issues such as CronRAT may take a little while longer for various industries to figure out.

While there are varying levels of protection for web purchases, the protection you get may depend on your payment method and/or location. It’s also not great to know that if payment data has been compromised, it’s possible the criminals have grabbed other data too. While this may not be the most reassuring message to take into the new year, forewarned is most definitely forearmed.

The post CronRAT targets Linux servers with e-commerce attacks appeared first on Malwarebytes Labs.

Hackers all over the world are targeting Tasmania’s emergency services

Emergency services—under which the police, fire, and emergency medical services departments fall—are vital infrastructure for any country or state. And when those services come under threat from either physical or cyber entities, the lives of citizens are put at risk as well.

Unfortunately, not every place has the means and manpower of the US to put pressure on cybercriminals who dare target its vital infrastructure. And this is probably why some threat actors would rather take their chances targeting other countries for profit.

As a case in point, the island state of Tasmania in Australia continues to be subjected to multiple cyberattacks on its emergency services from all around the globe.

Hackers have tried breaking into Tasmania Police employee accounts over 800 times in the last 12 months, according to an internal report from the Department of Police, Fire and Emergency Management (DPFEM) that was obtained by ABC News Australia.

And that’s just the tip of the iceberg. The report also revealed:

  • CCTV cameras have been compromised
  • A section of the Tasmania Fire Service website was taken over by one or more unknown parties for at least two weeks
  • Two-factor authentication (2FA) was defeated on five occasions on devices owned by DPFEM employees

The DPFEM is said to store and maintain personally identifiable data and classified information, which makes it a goldmine for hackers. If it were ever completely compromised, DPFEM said, it wouldn’t be able to bounce back as quickly as Federal Group, the Tasmanian casino operator that fell victim to a ransomware attack by the DarkSide hacking group.

“Unlike Federal Group, DPFEM will not be able to recover its entire business operation in under six weeks, even with external assistance, because its Information Security Program is not mature enough to determine the full extent of a system compromise and, therefore, will be required to take all its systems back to bare metal to ensure environmental integrity,” the report said.

The report recommended that the Tasmania Police and Fire Service should invest $550,221 annually to “keep the department cyber safe.”

The post Hackers all over the world are targeting Tasmania’s emergency services appeared first on Malwarebytes Labs.

A week in security (Nov 22 – Nov 28)

Last week on Malwarebytes Labs

Stay safe!

The post A week in security (Nov 22 – Nov 28) appeared first on Malwarebytes Labs.

ICO challenges adtech to step up privacy protection

The UK Information Commissioner’s Office (ICO) wants the advertising industry to come up with new initiatives that address the risks of adtech, and take account of data protection requirements from the outset.

The ICO is an independent body set up to uphold information rights. The technology that is currently in use by the advertising industry has the potential to be highly privacy intrusive. And the ICO has the right to issue, on its own initiative or on request, opinions to Parliament, government, other institutions or bodies, and the public, on any issue related to the protection of personal data.

The problem

The concept is simple: Advertisers want to show adverts to individuals who are likely to buy their product, and consumers prefer to see adverts that are relevant to them over those that are not. To accomplish this, the advertising industry has come up with a complex web of data processing which includes profiling, tracking, auctioning, and sharing of personal data.

That approach leads to advertisers knowing far more about people than they need to, and having to store and secure all that data.

Moves in the right direction

In recent years, the ad industry has developed several initiatives for less intrusive technology to address these privacy risks. These include proposals from Google and other market participants to phase out the use of third-party cookies, and other forms of cross-site tracking, and replace them with alternatives.

Federated Learning of Cohorts (FLoC) is one of the initiatives by Google that aims to thread the needle of offering people targeted ads while respecting their privacy. That initiative got off to a bad start when it became known that Google had quietly added millions of Chrome users to a FLoC pilot without asking them.

Other recent developments highlighted by the ICO include:

  • Proposals like FLoC, that aim to phase out third-party cookies and replace them with alternatives.
  • Increases in the transparency of online tracking, such as Apple’s App Tracking Transparency, which has had a notable impact—both in terms of the number of users exercising control over tracking, as well as the market itself.
  • Mechanisms to enable individuals to indicate their privacy preferences in simple and effective ways.
  • Developments by browser developers to include tracking prevention in their software.

As an example of the last point, Enhanced Tracking Protection in Firefox automatically blocks trackers that collect information about your browsing habits and interests. But this is not as effective as you might hope. Blocking third-party cookies and related mechanisms does partially restrict cross-site trackers, but as long as a tracker is still being loaded in your browser, it can still track you. Not as easy as it was before, but tracking is still tracking, and the most prevalent cross-site trackers (looking at you, Google and Facebook) are certainly still tracking you.

Google

Google’s status in the digital economy means that any proposal it puts forward has a significant impact. Not just because of the market share of its browser, but also due to the services it offers individuals and organizations, and the large role it plays in the digital advertising market.

In 2019, Google announced its vision for the Google Privacy Sandbox. The building blocks for this were essentially:

  1. Most aspects of the web need money to survive, and advertising that relies on cookies is the dominant revenue stream.
  2. Blocking ads or cookies can prevent advertisers from generating revenue, threatening #1.
  3. If you block easily controllable methods like cookies, advertisers may turn to other techniques, like fingerprinting, that are harder for users to control.

Expectations

The ICO is attempting to insert itself into the rapidly evolving situation around adtech by means of a recently published opinion:

There is a window of opportunity for proposal developers to reflect on genuinely applying a data protection by design approach. The Commissioner therefore encourages Google and other participants to demonstrate how their proposals meet the expectations this opinion outlines.

The ICO is encouraging Google and other advertisers to demonstrate new proposals that can meet a set of expectations set out in the Opinion. It wants to see proposals to remove the use of technologies that lead to intrusive and unaccountable processing of personal data and device information, which increases the risks of harm to individuals.

The ICO says it expects any proposal to:

  • Engineer data protection requirements by default into the design of the initiative.
  • Offer users the choice of receiving adverts without tracking, profiling or targeting based on personal data.
  • Be transparent about how and why personal data is processed, and who is responsible for that processing.
  • Articulate the specific purposes for processing personal data and demonstrate how this is fair, lawful and transparent.
  • Address existing privacy risks and mitigate any new privacy risks that their proposal introduces.

Like the ICO, we are looking forward to more privacy-focused ways of delivering targeted advertising.

The post ICO challenges adtech to step up privacy protection appeared first on Malwarebytes Labs.

New law will issue bans, fines for using default passwords on smart devices

The idea of connecting your entire home to the internet was once a mind-blowing concept. Thanks to smart devices, that concept is now a reality. However, this technological advancement aimed at making our lives more convenient—not to mention very cool and futuristic!—has also opened a wide door for potential cybercriminals.

New figures from a recent investigation conducted by Which?, the UK’s leading consumer awareness and review site, say that smart devices could be exposed to over 12,000 hacking and unknown scanning attacks in a single week. And smart devices are big news—a study commissioned by the UK government in 2020 revealed that almost half (49 percent) of UK residents purchased at least one smart device since the pandemic started.

And because of our high propensity to leave the default passwords that came with our smart devices unchanged, we’re essentially exposing ourselves—our homes and our families’ data and privacy—to online attacks without knowing it.

To help address this cybersecurity and privacy problem, the UK government will soon roll out the Product Security and Telecommunications Infrastructure (PSTI) Bill that bans the use of default passwords for all internet-connected devices for the home, which we all call the Internet of Things (IoT). This law covers smartphones, routers, games consoles, toys, speakers, security cameras, internet-enabled white goods (fridge, washing machine, etc.) but not vehicles, smart meters, smart medical devices, laptops, and desktop computers. Firms that don’t comply will face huge fines.

The BBC has highlighted three new rules under this bill:

  • Easy-to-guess default passwords preloaded on devices are banned. All products now need unique passwords that cannot be reset to factory default
  • Customers must be told when they buy a device the minimum time it will receive vital security updates and patches. If a product doesn’t get either, that must also be disclosed
  • Security researchers will be given a public point of contact to point out flaws and bugs

A regulator will be appointed to oversee this bill once it is fully enforced. The regulator will also have the power to fine manufacturers of vulnerable smart devices and the markets that sell them (Amazon, for example) up to £10 million or 4% of their global earnings. They can also impose an additional fine of £20,000 a day if the company continues to be in violation of the law.

“This is just the first step”

Julia Lopez, the Minister of State at the Department for Digital, Culture, Media and Sport, said: “Our bill will put a firewall around everyday tech from phones and thermostats to dishwashers, baby monitors and doorbells, and see huge fines for those that fall foul of tough new security standards.”

While Ken Munro, a security consultant for Pen Test Partners, told the BBC he sees the bill as a “big step in the right direction”, he also cautioned against complacency: “However, it’s important that government acknowledges that this is just the first step. These laws will need continual improvement to address more complex security issues in smart devices,” he said.

The post New law will issue bans, fines for using default passwords on smart devices appeared first on Malwarebytes Labs.

Improving security for mobile devices: CISA issues guides

The Cybersecurity and Infrastructure Security Agency (CISA) has released two actionable Capacity Enhancement Guides (CEGs) to help users and organizations improve mobile device cybersecurity.

Consumers

One of the guides is intended for consumers. There are an estimated 294 million smartphone users in the US, which makes them an attractive target market for cybercriminals, especially considering that most of us use these devices every day.

The advice listed for consumers is basic and our regular readers have probably seen most of it before. But it never hurts to repeat good advice and it may certainly help newer visitors.

  • Stay up to date. Make sure that your operating system (OS) and the apps you use are up to date, and enable automatic updating where possible.
  • Use strong authentication. Make sure to use strong passwords or PINs to access your devices, and biometrics if possible and when needed. For apps, websites and services, use multi-factor authentication (MFA) where possible.
  • App security:
    • Use curated app stores and stay away from apps that are offered through other channels. If they are not good enough for the curated app stores, they are probably not good for you either.
    • Delete unneeded apps. Remove apps that you no longer use, not only to free up resources, but also to diminish the attack surface.
    • Limit the amount of Personally Identifiable Information (PII) that is stored in apps.
    • Grant least privilege access to all apps. Don’t allow the apps more permissions than they absolutely need in order to do what you need them to do, and minimize their access to PII.
    • Review location settings. Only allow an app to access your location when the app is in use.
  • Network communications. Disable the network protocols that you are not using, like Bluetooth, NFC, WiFi, and GPS. And avoid public WiFi unless you can take the necessary security measures. Cybercriminals can use public WiFi networks, which are often unsecured, for attacks.
  • Protection:
    • Install security software on your devices.
    • Use only trusted chargers and cables to avoid juice jacking. A malicious charger or PC can load malware onto smartphones that may circumvent protections and take control of them. A phone infected with malware can also pose a threat to external systems such as personal computers.
    • Enable lost device functions or a similar app. Use auto-wipe settings or apps to remove data after a certain number of failed logins, and enable the option to remotely wipe the device.
  • Phishing protection. Stay alert, don’t click on links or open attachments before verifying their origin and legitimacy.

Organizations

The guide for organizations does duplicate some of the advice given to consumers, but it has a few extra points that we would like to highlight.

  • Security focused device management. Select devices that meet enterprise requirements with a careful eye on supply chain risks.
  • Use Enterprise Mobility Management solutions (EMM) to manage your corporate-liable, employee-owned, and dedicated devices.
  • Deny access to untrusted devices. Devices are to be considered untrusted if they have not been updated to the latest platform patch level; they are not configured and constantly monitored by EMM to enterprise standards; or they are jailbroken or rooted.
  • App security. Isolate enterprise apps. Use security container technology to isolate enterprise data. Your organization’s EMM should be configured to prevent data exfiltration between enterprise apps and personal apps.
  • Ensure app vetting strategy for enterprise-developed applications.
  • Restrict OS/app synchronization. Prevent data leakage of sensitive enterprise information by restricting the backing up of enterprise data by OS/app-synchronization.
  • Disable user certificates. User certificates should be considered untrusted because malicious actors can use malware hidden in them to facilitate attacks on devices, such as intercepting communications.
  • Use secure communication apps and protocols. Many network-based attacks allow the attacker to intercept and/or modify data in transit. Configure the EMM to use VPNs between the device and the enterprise network.
  • Protect enterprise systems. Do not allow mobile devices to connect to critical systems. Infected mobile devices can introduce malware to business-critical ancillary systems such as enterprise PCs, servers, or operational technology systems. Instruct users to never connect mobile devices to critical systems via USB or wireless. Also, configure the EMM to disable these capabilities.

While you may not feel the need to apply all the advice listed above, it is good to at least know about it and consider whether it fits into the security posture that matches your infrastructure and threat model.

Stay safe, everyone!

The post Improving security for mobile devices: CISA issues guides appeared first on Malwarebytes Labs.

Google’s Threat Horizons report: Will the straightforward approach get results?

Google’s Cybersecurity Action Team has released a Threat Horizons report focusing on cloud security. It’s taken some criticism for being surprisingly straightforward and less complex than you may expect. On the other hand, many businesses simply don’t understand many of the threats at large. Perhaps this is a way of easing the people the report is aimed at into the wider discussion.

At any rate, the report is out and I think it’s worth digging into. They may be taking the “gently does it” approach because so many of their customers are falling foul of bad things. It makes sense to keep it simple in an effort to have people pay attention and nail the basics first. After all, if they can’t do that, then complex rundowns stand no chance.

Key features of the report

The executive summary lists a number of key points. There’s a strong focus on issues and concerns for people using Google services. For example:

“Of 50 recently compromised GCP instances, 86% of the compromised cloud instances were used to perform cryptocurrency mining, a cloud resource-intensive, for profit activity. Additionally, 10% of compromised cloud instances were used to conduct scans of other publicly available resources on the internet to identify vulnerable systems, and 8% of instances were used to attack other targets”.

In case you’re wondering, GCP means Google Cloud Platform.

Elsewhere, the summary mentions that Google cloud resources were used to generate bogus YouTube view counts. This sounds interesting, and it would probably be useful to know more about it. Unfortunately there are no details in the summary, and the full report doesn’t go into the nitty-gritty of what happened either. Given this one is a clear and easily understandable way to explain how [bad thing in cloud] equals [bad knock-on effect for service everyone you know uses], it seems strange to keep us guessing.

Google also references the Fancy Bear/APT28 Gmail phishing attack, which we covered last month. While this isn’t exactly a common concern for most people, it is good to reiterate the usefulness of multiple Google security settings. 2FA, apps, backup codes, and advanced security settings are always better to have up and running than not at all.

It’s not just Google services up for discussion…

The report also briefly branches out into other realms of concern. Bogus job descriptions posing as Samsung PDFs were deliberately malformed, leading to follow-up messages in which malware lurked at the links provided by the sender.

This campaign is apparently from a North Korean government-backed group, which previously targeted security researchers. There’s also a lengthy rundown of BlackMatter ransomware, and (again) various tips for Google-specific cloud products in terms of keeping the BlackMatter threat at arm’s length.

The full report is a PDF weighing in at 28 pages. Yes, it’s a bit light on details. However, it’s quite possible to send people running for the hills with 80+ pages of heavy-duty security information. If people are making rudimentary mistakes, why not make a gesture of highlighting said mistakes?

Simply does it

As we heard in our recent Lock and Code episode, the basics are no laughing matter. Many organisations don’t have the time, money, or resources available, and are unable to tackle what some would consider to be incredibly obvious issues. There’s plenty of detailed security information out there already on multiple Google pages. Maybe this back-to-basics approach will pay off in the long run.

If Google’s main concern is mostly “script kiddy with a cryptominer”, then a script kiddy with a cryptominer focus we shall have. For now, we’ll just have to wait and see what kind of uptake this new approach receives and go from there.


The post Google’s Threat Horizons report: Will the straightforward approach get results? appeared first on Malwarebytes Labs.