IT NEWS

Signal app insists it’s so private it can’t provide subpoenaed call data

Signal—the private, end-to-end encrypted messaging app that surged in popularity in recent months—once again reminded criminal investigators that it could not fully comply with a legal request for user records and communications because of what it asserts as a simple, unchanging fact: The records do not exist on Signal’s servers.

This is at least the second request of this kind that Signal has received in the last five years, and in the same time period, similar government demands to pry apart end-to-end encrypted communications have become commonplace. Every single time the government has tried this—from the FBI’s insistence in 2016 that Apple create new software to grant access to a device, to the introduction of the EARN IT Act in Congress last year—cybersecurity experts have pushed back.

The legal request to Signal came from the US Attorney’s Office in the Central District in California in the form of a federal grand jury subpoena. According to the subpoena, investigators sought “all subscriber information” belonging to what appeared to be six Signal users. The requested information included “user’s name, address, and date and time of account creation,” the date and time that the users downloaded Signal and when they last accessed Signal, along with the content of the messages sent and received by the accounts, described in the request as “all correspondence with users associated with the above phone numbers.”

Signal responded to the subpoena with help from lawyers at the American Civil Liberties Union (ACLU). According to the company’s response, Signal could only provide two of the categories of information requested by the US Attorney’s Office.

“The only information Signal maintains that is responsive to the subpoena’s inquiries about particular user accounts is the time of account creation and the time of the account’s last connection to Signal servers,” wrote ACLU attorneys Brett Kauffman and Jennifer Granick. Kauffman and Granick also addressed some of the US Attorney’s Office’s questions about the physical locations of Signal’s servers and whether the technical processes of account creation and communication for Signal users in California ever leave the state of California itself.  

In a blog post published this week, Signal explained why it again could not comply with a subpoena for user information: because of the app’s design, such user information never reaches the company’s servers.

“It’s impossible to turn over data that we never had access to in the first place,” the company wrote. “Signal doesn’t have access to your messages; your chat list; your groups; your contacts; your stickers; your profile name or avatar; or even the GIFs you search for.”

This lack of access, while excellent for user privacy, has frustrated law enforcement for years. It is a problem often referred to as “going dark,” in that the communications of criminals using end-to-end encrypted messaging apps are inaccessible to any third parties, including government investigators. Former Deputy Attorney General Rod Rosenstein has referenced the “going dark” problem, as has current FBI Director Christopher Wray. Many other officials have, as well, and each time their refrain has stayed the same: End-to-end encrypted messaging apps provide a level of security that is too extreme to allow without a way for law enforcement to break through it.

But it’s magical thinking on the government’s part.

As many cybersecurity experts have explained over literal decades, allowing third parties to access secure, end-to-end encrypted communications will, by definition, make them less secure, functioning in effect as a backdoor. And a backdoor, in and of itself, is a security vulnerability.

Signal’s efforts to publicize its grand jury subpoena are notable—these requests often come with an instruction that the recipient not disclose any details of the request, else they risk jeopardizing an ongoing criminal investigation. These are valid concerns, but so are the concerns raised by Signal, which are that, even after all this time, government agents still believe that evidence can be conjured out of thin air.

The post Signal app insists it’s so private it can’t provide subpoenaed call data appeared first on Malwarebytes Labs.

City fined for tracking its citizens via their phones

The Dutch information watchdog—the Autoriteit Persoonsgegevens (AP)—has fined the city of Enschede €600,000 for tracking its citizens’ movements without permission. It is the first time the AP has fined a Dutch government body. The investigation was set in motion after the AP received a complaint about the tracking.

The Autoriteit Persoonsgegevens is the Dutch supervisory authority charged with monitoring how companies and governments process Personally Identifiable Information (PII) in the Netherlands. In other words, it guards privacy-sensitive information, and how it is handled.

What did Enschede do wrong?

The city of Enschede hired a company to keep track of how crowded its city center was. The company it hired used Wi-Fi-tracking to measure how many people were present at one time. The Wi-Fi-tracking system assigned a unique ID to each passing phone that had Wi-Fi enabled (based on each phone’s unique MAC address), so it could count the number of these phones, which gave it a pretty accurate idea of the number of people.

However, because this method of measurement was used over a period of years (2017-2020) that overlapped with the EU’s General Data Protection Regulation (GDPR) coming into effect, the AP ruled that a method intended for counting had turned into something that could be used for tracking.

The AP mentioned in its ruling that since a MAC-address is a unique identifier for a device, and since mobile devices like phones and tablets are mostly personal items, they can be used to identify a person. The system in Enschede used pseudonymization for the MAC addresses, but the AP ruled that was not enough to make the data truly anonymous, as they could still be combined with other data.
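To see why the AP drew that conclusion, consider a minimal sketch of how MAC-based counting works. The salt and hashing scheme below are invented for illustration—this is not the vendor’s actual system:

```python
import hashlib

def pseudonymize(mac: str, salt: str = "sensor-secret") -> str:
    """Hash a MAC address with a fixed salt, as a counting system
    might. The salt and scheme here are invented for illustration."""
    return hashlib.sha256((salt + mac.lower()).encode()).hexdigest()[:12]

# Counting: how many distinct phones did the sensor see?
sightings = ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02", "AA:BB:CC:DD:EE:01"]
unique_phones = {pseudonymize(m) for m in sightings}
print(len(unique_phones))  # 2 distinct devices

# Tracking: the same phone always produces the same pseudonym, day
# after day, sensor after sensor -- a stable identifier for one
# device, which is exactly what the AP objected to.
assert pseudonymize("AA:BB:CC:DD:EE:01") == pseudonymize("aa:bb:cc:dd:ee:01")
```

The hash hides the raw MAC address, but because it is deterministic, the pseudonym follows the device around—which is why the AP ruled that pseudonymization alone does not make the data anonymous.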

The AP ruled that the privacy of regular visitors and inhabitants of the city was compromised because they could be tracked without a real necessity. This was never the intention, but the fact that Wi-Fi-tracking over a prolonged period made this possible was reason enough for the steep fine.

In its ruling, the AP was adamant about the distinction between counting and tracking and emphasized how important it is that citizens should not be followed around, intentionally or not.

Tracking data can be turned into PII

If you find the same phone often enough, data intended for counting can be turned into data suitable for tracking. And if you put in enough effort and have enough data points, you can establish patterns that can be used to identify a person (when this approach is used deliberately and legitimately, it’s called “Big Data”, for good reason). For example, if the same phone checks in at a certain point at 9 AM and leaves around 5 PM, you can assume that the owner of that phone probably works in or near that location.

And even if none of the companies collecting or accessing that data intend to use it for that purpose, they or anyone buying or stealing the data, could.
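A toy example shows how little code it takes to turn counting data into an inference about a person. The sightings, sensor names, and timestamps below are entirely made up:

```python
from collections import Counter
from datetime import datetime

# Entirely made-up sightings of one pseudonymous device:
# (sensor location, timestamp)
sightings = (
    [("office-district", datetime(2020, 3, d, 9, 2)) for d in range(2, 7)]
    + [("office-district", datetime(2020, 3, d, 17, 1)) for d in range(2, 7)]
    + [("station", datetime(2020, 3, 2, 8, 45))]
)

# Where does this device turn up on weekday mornings?
morning = Counter(
    loc for loc, ts in sightings if ts.weekday() < 5 and 8 <= ts.hour <= 10
)
likely_workplace, days_seen = morning.most_common(1)[0]
print(likely_workplace, days_seen)  # office-district 5
```

Five mornings of data and a three-line query are enough to guess where the phone’s owner works—no names, no MAC addresses, just the stable pseudonym and timestamps.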

The AP has strict rules about using Wi-Fi and Bluetooth-tracking and makes it clear that it is forbidden in most cases. It describes the large numbers of data points that can be collected by such tracking as “indirectly identifiable data” because while it is pseudonymous, it can be used to track people, and can be combined with other data to unmask individuals and render PII. For example, combining Wi-Fi-tracking with CCTV footage or payment data.

Who had access to the data?

The city and two companies that were involved in the measurements had access to the raw data. One of the companies carried out the order from the city and the other maintained the hardware and processed the data. The AP held the city responsible since it was the commissioning party. The city has filed an appeal against the ruling because they do not consider the data to be PII and their sole objective was counting, not tracking.

100 other cities

The company that operated the sensors in Enschede has 100 other cities and townships among its customers. When asked, it stated that data gathered with Wi-Fi-tracking is no longer saved for more than 24 hours, which, given the original goal of gathering the data, makes perfect sense.

The post City fined for tracking its citizens via their phones appeared first on Malwarebytes Labs.

What is Smishing? The 101 guide

Smishing is a valuable tool in the scammer’s armoury. You’ve likely run into it, even if you didn’t know its name. It doesn’t arrive by email or social media direct message; instead it takes a route aimed directly at what may be your most personal device: the mobile phone. So, what is Smishing? We’re glad you asked.

Defining a Smish

Smishing is a combination of the words “SMS” and “phishing”, indicating phishing sent across your mobile network in the form of a text message. It’s often thought of as the latest scam on the block, but it’s been popular for a few years now. The pandemic, combined with a rise in home deliveries, has only increased its popularity further.

What is a Smishing attack?

It’s a fake message sent to mobile devices, using social engineering to encourage the recipient to click a link. The difference between Smishing and Vishing is that Vishing uses fraudulent voice messages rather than texts and links.

Common Smish attempts focus on everyday needs or requirements. Late payments, missed deliveries, bank notifications, fines, and urgent notices are prime vehicles for a smishing attack.

COVID-19 has ensured that bogus vaccination messaging is also a common Smishing technique.

Most smishing text messages attempt to direct victims to fake login screens, with the possibility of asking for payment details further on. They may use URL shortening services in an attempt to conceal overtly fake login links. Potential victims may have never seen a Smish before, and so assume anything sent via SMS is legitimate. It may also be more difficult to view the full URL on a mobile browser, which is to the phisher’s advantage.
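As a rough illustration, here is the kind of check that can flag a smishing link. The heuristics and domain lists below are invented for the example—real SMS filters are far more sophisticated:

```python
from urllib.parse import urlparse

# Illustrative samples only; real filters use much larger lists.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}
KNOWN_BRANDS = {"royalmail.com", "dhl.com"}

def looks_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in SHORTENERS:
        return True  # destination is concealed behind a shortener
    # Brand name appears in the host, but it isn't actually the
    # brand's domain (e.g. royalmail.parcel-fee.com)
    for brand in KNOWN_BRANDS:
        name = brand.split(".")[0]
        if name in host and not host.endswith(brand):
            return True
    return False

print(looks_suspicious("https://bit.ly/3xAbCd"))                 # True
print(looks_suspicious("https://royalmail.parcel-fee.com/pay"))  # True
print(looks_suspicious("https://www.royalmail.com/track"))       # False
```

Even a crude check like this catches the two tricks mentioned above: shortened links that hide their destination, and lookalike domains that borrow a trusted brand name.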

Smishing attack examples

Offering fake discounts on bills is a popular method of smishing attack. The drawback here is that these messages aren’t typically targeted. As a result, large numbers of people without the relevant accounts will simply disregard the message. This isn’t necessarily a problem for the smisher, however. These messages are sent in bulk, and the scammer expects a small number of responses from casting a wide net. The combined ill-gotten gains from the people who do fall for it likely more than make up for the initial outlay.

Late / delayed parcels are a huge prospect for Smishers. If you had to define Smishing with a single current-day example, this would be the quintessential Smish attack. With so many people at home, and so many daily purchases made online, we’re awash with cardboard. It’s very difficult to keep track of everything coming into the house. Combining well-known delivery services with fake “delivery fee” notifications is a recipe for Smishing success.

A Royal Mail Smishing scam
A Smishing message asking for a “shipping fee” to be paid at a bogus website

In both examples, you can see the potential for success. Pinning these two attacks around what people can gain (or indeed, lose) gives them added credibility by playing on the hopes and fears of victims.

Can we stop these attacks?

The reality of this situation is, nobody can stop Smishing 100%. However, we can certainly take some steps to significantly reduce it:

  • If it sounds too good (or too bad) to be true, it probably is. Having said that, many Smish messages sound totally innocent and aren’t trying too hard to bribe or threaten. What we’re trying to say here is: don’t assume any message from a service or organisation is the real deal. If you’re being asked to do something, the very best thing you can do is contact the organisation directly via a known method you trust. When it turns out to be a fake, you should be able to report it to them, there and then.
  • Those living somewhere with Do Not Call lists or spam reporting services should make full use of them. Report, report, report those bogus messages and numbers. Your mobile device may already have some form of “safe” message ID enabled without you knowing. It’s tricky to give specific advice here because of the sheer variety of options available on different models of phone, but the Options / Safety / Security / Privacy menus are a good place to start.
  • Never click the links, and don’t enter personal information on the websites the Smisher sends you. Avoid replying to the scam SMS too. Best case scenario, it’s not a real number and your message bounces. Worst case, you’ve confirmed you exist and they add you to spam lists and / or start harassing you further. Report, block, and move on.

Anti-Smishing efforts

It’s not just phone owners doing their bit to tackle Smishing. Organisations have been taking steps to lock this threat down for some time now. Last year, the SMS SenderID Protection Registry gave companies the ability to register and protect message headers. We have Attorneys General warning of the dangers, and the sheer saturation of fake Royal Mail delivery fee messages has made the issue go mainstream in the UK. We can only hope Smishing’s sudden rise to fame during the pandemic leads to an equally speedy demise.

For the time being, keep a watchful eye on those text messages and treat them with the same suspicion you’d give to a random missive in your email inbox.

The post What is Smishing? The 101 guide appeared first on Malwarebytes Labs.

Watch out! Android Flubot spyware is spreading fast

Using a proven method of text messages about missed deliveries, an old player on the Android malware stage has returned for an encore. This time it seems to be very active, especially in the UK where Android users are being targeted by text messages containing a link to a particularly nasty piece of spyware called Flubot.

Warning from the National Cyber Security Centre

On its website, the National Cyber Security Centre (NCSC) warns about spyware that is installed after a victim receives a text message asking them to install a tracking app because of a missed package delivery. The tracking app is in fact spyware that steals passwords and other sensitive data. It also accesses contact details and sends out additional text messages in order to further the spread of the spyware.

Network providers join in

Apparently, the problem is so widespread that network providers have taken notice, and some of them, including Three and Vodafone, have issued warnings to users about the text message attacks.

Three urges victims that have installed the spyware:

You should be advised that your contacts, SMS messages and online banking details (if present) may have been accessed and that these may now be under the control of the fraudster.

It goes on to tell victims that a factory reset is needed, or they will run the risk of a fraudster accessing their personal data.

Branding of the text messages

Most of the reported messages pretend to be coming from DHL.

example of a smishing message
DHL example

But users have also reported Royal Mail and Amazon as the “senders.” Readers should be aware that it isn’t enough to simply watch out for messages from one or two senders though. If the campaign proves successful for the criminals running it, it will evolve and change over time and they will likely try other tactics.

History of Flubot

These types of smishing (SMS phishing) attacks have been on the rise in the last few years. Previously, Flubot was spotted operating a fake FedEx website targeting Android users in Germany, Poland, and Hungary in basically the same way: sending text messages with a parcel tracking URL that led to malware downloads. Initially the operators worked in Spain (with Correos Express as the sender), until arrests were made there, which slowed the operation down for a while. It would not come as a surprise if continued success leads the Flubot operators to target the US next.

Infection details

Malwarebytes for Android detects several Flubot variants as Android/Trojan.Bank.Acecard, Android/Trojan.BankBot, or Android/Trojan.Spy.Agent.

As we pointed out, the initial attack vector is a text message with a link that downloads the malware. The package names often include com.tencent, and the apps use the delivery service’s logo as their icon. During installation the malware shows misleading prompts to get itself installed and to acquire the permissions it needs. These permissions allow it to:

  • Send messages to your contacts
  • Act as spyware and steal information

Depending on the variant, Flubot may have additional capabilities.

Don’t click!

Unless you know exactly what to look for to determine whether a message is actually coming from the claimed sender, it is better not to click on links in unsolicited text messages. That is always solid advice, but when you are actually expecting a parcel, the message may not count as unsolicited in your mind.

Our first impulse is often to click and find out what’s up. At the very least, we should stop and ask if the message and the URL stand up to scrutiny. If you think the message is genuine, it is still best not to click on the link, but instead search for the vendor’s website and look for its parcel tracker.

If you did not click the link, simply remove the message from your device so you do not click it by accident in the future.

If you have clicked the link but then stopped because you were suspicious of the fact that it initiated a download, well done. You stopped in time.

If you did download the malware, scan your device with a legitimate Android anti-malware app. If it can’t disinfect your phone, you will need to perform a factory reset to remove it. If you do this, there is a possibility you will lose more than just the malware, unless you have made backups.

You should also change any passwords you stored on the device, and any you entered on the device after the infection began, because they may have been compromised by the spyware.

Finally, if you used the device for online banking, check your bank balances and contact your bank so that they can stop or correct any fraud that results.

Stay safe, everyone!

The post Watch out! Android Flubot spyware is spreading fast appeared first on Malwarebytes Labs.

Bitcoin scammers phish for wallet recovery codes on Twitter

We’re no strangers to the Twitter customer support DM slide scam. This is where someone watches an organisation perform customer support on Twitter, and injects themselves into the conversation at opportune moments hoping potential victims don’t notice. This is aided by imitation accounts modelled to look like the genuine organisation’s account. The victim is typically sent to a phishing page where accounts, payment details, identities, or other things can be stolen.

We first observed the technique used on gamers back in 2014, and it eventually branched out into bank phishing. This time around, it’s being used to bag bitcoin. Shall we take a look?

Emptying your wallet

Trust Wallet is an app used to send, receive, and store Bitcoin along with other cryptocurrencies, including NFTs. With cryptocurrency being so very mainstream at the moment, it’s only natural lots of people are jumping on the bandwagon. Even those who know what they’re doing often run into trouble. I suspect the newcomers to the field are experiencing all manner of issues daily. This is a perfect storm of confused users and scammers lying in wait.

Take note of what the official TrustWalletApp account says, in relation to keeping your coins safe:

They are emphatic about keeping the recovery phrase safe. This is a method to regain access to a wallet, made up of 12 words. Whoever possesses the phrase, holds the keys to the kingdom (or at least, your wallet). If your coins have a lot of value attached, it would clearly be disastrous to lose access.
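For the technically curious: the BIP-39 standard used by many wallets derives the wallet seed deterministically from the recovery phrase alone, which is why whoever holds the phrase holds the wallet. A minimal sketch of that derivation using Python’s standard library (the phrase below is a published BIP-39 test vector, not a real wallet):

```python
import hashlib

def mnemonic_to_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """BIP-39 seed derivation: the 64-byte wallet seed is computed
    deterministically from the recovery phrase, so anyone holding
    the phrase can reconstruct the wallet."""
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,  # iteration count fixed by the BIP-39 spec
    )

# Published test-vector phrase -- never put a real phrase in code!
phrase = ("legal winner thank year wave sausage worth useful "
          "legal winner thank yellow")
seed = mnemonic_to_seed(phrase)
print(len(seed))  # 64
```

There is no server, no password reset, and no second factor in that derivation: the 12 words are the entire secret, which is why handing them to a “support form” hands over everything.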

This is where our tale begins in earnest, in the replies to that tweet.

Oh no, my coins!

An individual claims they had their coins stolen, but managed to regain them.

wallet phishing1

Thank God I finally got all my stolen coin and money back!

I can now rest my head.

So far, so good. Further down, however, it all goes a bit wrong. Just a few replies down, they say this:

I lost all my money and coins my wallet last week, until I contacted their support page and they helped me rectify and resolved it, I think if you have any of this problem you should write to them too at [URL removed]

The link (powered by a DIY survey creator, where anybody can make whatever batch of questions they want) does exactly what TrustWalletApp says not to do: asks for the 12 word recovery phrase.

wallet phishing7
A fake support form on a popular survey site asks users to break “The first rule of Crypto”
wallet phishing5
A fake support form in a Google Doc asks users to break “The first rule of Crypto”

A swarm of bad tidings

The scam isn’t being spread by just one account, nor is there just one bogus support form. Multiple Twitter profiles lurk in the replies of anyone having a bad cryptocoin experience. One even claims to be the “Trust Wallet Team”, and does nothing but spam links to a Google Doc. The accounts are most likely set up to autorespond to anybody sending messages to the TrustWalletApp account, especially if it looks like they need assistance. No fewer than 19 responses were sent in one day from one account, and given the ever-fluctuating cryptocurrency values, just one bite could result in a decently-sized payday for the scammers.

wallet phishing2
Scammers attempt to lure struggling cryptocoin owners into breaking the “First rule of Crypto”

This is a low-maintenance attack with potentially high gains. It’s so common that one of the accounts sending bogus Google Doc links even sent one to the person, or bot, we originally saw firing out bad links!

What can you do to keep your coins secure?

This isn’t just imitation organisation accounts dropping themselves into support chats. We also have lots of random, non-imitation accounts trying the same tactic. As a result, “regular account” doesn’t necessarily mean they’re being helpful. The kindness of strangers is often very helpful, but never take anything for granted. Cryptocurrency is in a bit of a modern-day gold rush at the moment, and people will do absolutely anything to get their hands on it.

Legitimate companies are unlikely to be performing technical support via Google Docs or survey sites, so avoid links that attempt to do that. Most importantly though, as per the Trust Wallet team themselves: never send anybody your 12 word recovery phrase. Not even Trust Wallet. Ever.

Passwords, pass codes, pass phrases, pass-whatevers are meant to be secrets, and they aren’t secrets if you tell somebody else. No company worth bothering with will ever ask for your password, so don’t give it out. It’s the surest way imaginable to lose control of an account. And, because of the way that cryptocurrencies work, once the scammers have your wallet, it’s theirs. You almost certainly won’t be able to recover it.

That’s one promise you can take to the crypto-bank.

The post Bitcoin scammers phish for wallet recovery codes on Twitter appeared first on Malwarebytes Labs.

Ransomware group threatens to leak information about police informants

UPDATE 12:12 PM Pacific Time, April 28: As of at least 9:40 AM Pacific Time, the Babuk ransomware gang removed any reference to the allegedly stolen DC Police Department data from its data leak website. This does not indicate with any certainty that the DC Police Department paid Babuk, but it is rare for a ransomware group to remove data without first receiving payment.

A screenshot captured by a Malwarebytes researcher is shown below, with no reference to the DC Police Department hack.

Babuk DC police screenshot
The Babuk ransomware group’s data leak website no longer shows any reference to the DC Police Department data hack. Credit: Malwarebytes

Original story below:

One day after a ransomware group shared hacked data that allegedly belonged to the Washington, D.C. Police Department online, the police force for the nation’s capital confirmed it had been breached.

“We are aware of unauthorized access on our server,” the Metropolitan Police Department—the official title of the DC police—said on Tuesday. “While we determine the full impact and continue to review activity, we have engaged the FBI to fully investigate this matter.”

But as the DC police sort out the attack, they’re working against the clock—the cyberattackers threatened to share information on police informants with criminal gangs in just three days, threatening the safety of those informants and the stability of related criminal investigations.

The attack represents the latest example in two growing trends, in which cybercriminals have increasingly targeted government agencies since the start of 2021, and in which ransomware operators are supplementing their bread-and-butter tactics—which include encrypting a victim’s files and then demanding a payment to unlock those files—with new threats to publish sensitive data.

Claiming responsibility for the DC police cyberattack is the ransomware gang Babuk. On Monday, the group said on a dark web data leak site that it had stolen 250 GB of data from the DC police, and it posted several screenshots as proof. According to Bleeping Computer, which viewed the images, the screenshots included folder names that related to “operations, disciplinary records, and files related to gang members and ‘crews’ operating in DC.”  

Bleeping Computer also shared Babuk’s threat that was made to the DC police:

“Hello! Even an institution such as DC can be threatened, we have downloaded a sufficient amount of information from your internal networks, and we advise you to contact us as soon as possible, to prevent leakage, if no response is received within 3 days, we will start to contact gangs in order to drain the informants, we will continue to attack the state sector of the usa, fbi csa, we find 0 day before you, even larger attacks await you soon.” 

The ransomware group also warned that one of the files in its possession could be related to arrests made following the January 6 insurrection against the US Capitol.

The attack, while severe, is part of an increasingly commonplace trend. According to the New York Times, this is the third police department hit by cybercriminals in just three weeks. Further, since the start of 2021, 26 government agencies have been victims of ransomware attacks, and 16 of those agencies were specifically hit with threats to publish sensitive data.

These attacks follow what Malwarebytes has called a “double extortion” model, in which ransomware operators hit the same target two times over—not only locking a victim’s files, which will cost money to decrypt, but also stealing sensitive data, which will also cost money to keep private.

The double extortion model is relatively new, but it is already popular.

According to a March analysis from the cybersecurity company F-Secure, nearly 40 percent of the ransomware families discovered in 2020, as well as several older families, demonstrated data exfiltration capabilities by year’s end. And almost half of those families used those capabilities in the wild. Further, as we learned in the Malwarebytes State of Malware 2021 report, the double extortion model has proved to be surprisingly lucrative: One ransomware group pulled in $100 million in 2019 without pressing victims to unlock encrypted files.

That Babuk—which was discovered by Bleeping Computer just months ago—has already incorporated the double extortion model likely means that this threat will not be going away any time soon.

The post Ransomware group threatens to leak information about police informants appeared first on Malwarebytes Labs.

Password manager hijacked to deliver malware in supply chain attack

In the latest example of a supply chain attack, cybercriminals delivered malware to customers of the business password manager Passwordstate by breaching its developer’s networks and then deploying a fraudulent update last week, said Passwordstate’s maker, Click Studios.

Though the number of infected computers is currently unknown, Click Studios said in an April 24 advisory that the victim count “appears to be very low.” That estimate may increase though, said Click Studios, as it continues to investigate. According to the company, its password manager is used by more than 29,000 customers across the industries of banking, retail, manufacturing, education, healthcare, government, aerospace, and more.

The attack lasted just 28 hours, Click Studios said, from April 20, 8:33 PM UTC to April 22, 0:30 AM UTC. Only the customers who initiated an update between those hours are at risk.

As to how the cybercriminals breached the company, Click Studios only said that “a bad actor using sophisticated techniques compromised the In-Place Upgrade functionality.” Click Studios said the “initial compromise was made to the upgrade director located on Click Studios website www.clickstudios.com.au,” which regularly points Passwordstate’s in-place upgrade function to approved software versions loaded onto Click Studios’ Content Distribution Network, or CDN. By compromising the in-place upgrade functionality, the cybercriminals were able to point users to their own CDN, which carried the malware.
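One defence against this kind of redirection is for the update client to verify every downloaded package against a hash or signature published out-of-band, so a hijacked CDN cannot silently substitute a payload. A minimal sketch of the idea—hypothetical, and not how Passwordstate’s upgrade process actually worked:

```python
import hashlib

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded update against a hash obtained through a
    separate, trusted channel (e.g. signed release notes). This is an
    illustrative sketch, not any vendor's actual mechanism."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Hypothetical genuine release and its pinned hash
genuine = b"upgrade_package_v9.x contents"
pinned = hashlib.sha256(genuine).hexdigest()

print(verify_update(genuine, pinned))               # True
print(verify_update(b"tampered payload", pinned))   # False
```

The check only helps if the expected hash travels over a channel the attacker doesn’t control; if the same compromised server delivers both the package and the hash, verification buys nothing.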

The malware—currently referred to as Moserpass—stole system information and Passwordstate data and then delivered it back to the servers that were controlled by the cybercriminals responsible for the attack.  

That data included computer name, username, domain name, current process name, current process ID, and several fields from a customer’s Passwordstate account, including title, username, description, notes, URL, and password. Data for certain “generic field” entries was also delivered, but Click Studios said that users who chose to encrypt that data averted the malware’s data harvesting and delivery capabilities.

Click Studios also clarified in its April 24 advisory that, “although the encryption key and database connection string are used to process data via hooking into the Passwordstate Service process, there is no evidence of encryption keys or database connection strings being posted to the bad actor CDN.”

According to Bleeping Computer, the CDN servers used in the attack are no longer active.

The Passwordstate attack is the latest example of a re-emerging cyberthreat that saw great attention back in 2013 when the US retailer Target suffered an enormous data breach that compromised the payment information of 41 million customers. That attack, which resulted in an $18.5 million settlement, began with an attack on the company’s HVAC vendor. Four years later, cybercriminals again relied on a supply chain attack to breach Equifax, and just two years after that, the SolarWinds supply chain attack rattled the entire cybersecurity industry.

These attacks are difficult to catch, and for an attack like the one that targeted Passwordstate, they pose a significant threat to cybersecurity overall, as users and businesses could begin to question the legitimacy of regular software updates.

For Passwordstate’s customers who did install the fraudulent update, Click Studios advised to contact the company’s customer support, and, following specific instructions, to begin resetting all passwords saved in Passwordstate.

The post Password manager hijacked to deliver malware in supply chain attack appeared first on Malwarebytes Labs.

Zoom deepfaker fools politicians…twice

We recently said deepfakes “remain the weapon of choice for malign interference campaigns, troll farms, revenge porn, and occasionally humorous celebrity face-swaps”. Skepticism that these techniques would work on a grand scale, such as an election, remains in place. In the realm of malign interference and smaller-scale antics, however, deepfakes continue to forge new ground.

It’s one thing to pretend to be anonymous law enforcement operatives at the other end of a web call, with no deepfake involvement. It’s quite another to deepfake the aide of a jailed Russian opposition leader.

Zooming into deepfake territory

Multiple groups of MPs were recently tricked into thinking they were talking to Leonid Volkov, a Russian politician and chief of staff of Alexei Navalny’s 2018 presidential election campaign. Instead, Dutch and Estonian MPs at different meetings were presented with an entirely fictitious entity forged in the deepfake fires. From the various reports on these incidents, we’re not entirely sure if fake Leonid responded to questions or stuck to a pre-written script. We also don’t know if the culprits faked his voice or spliced real snippets to form sentences. Based on this report, it appears the Zoom call was conversational, but details are sparse. The aim of the game was most likely to get MPs to say they want to support the Russian opposition with lots of money.

How did this happen?

It appears basic security practices were not followed. Nobody verified it was him beforehand. His email wasn’t pinged, nobody said “Hey there…” on social media. This is rather incredible, considering people doing an Ask Me Anything on Reddit will hold up a “Hi Reddit, it’s me” note as a bare minimum. With such a non-existent security procedure in place, disaster is sure to follow.

One wonders, given the absence of contact with the real Leonid, how fake Leonid had the Zoom sessions arranged in the first place. Can anyone arrange a call with a room of MPs if they claim to be somebody else? Do online meetings regularly take place with no effort to ensure everyone involved is legitimate? This all seems a little bit peculiar and faintly worrying.

Locking down deepfakes: in it for the long haul

Outside the realm of verification-free Zoom calls with parliamentarians, more moves are afoot to detect deepfakes. Sony has stepped into a battleground already populated by DIY tools and researchers trying to fight fakery online. Elsewhere, we have AI-generated maps. While this sounds scary, it’s not something we should be panicking over just yet.

Deepfakes continue to become more embedded in public consciousness which can only help raise awareness of the subject. You want some Young Adult fiction about deepfakes? Sure you do! Actors helping to popularise the concept of fake video as something to be expected? Absolutely. Wherever you turn…there it is.

Low-level noise and quiet misdirection

For now, malign interference campaigns and small-scale shenanigans are the continued order of the day. It’s never been more important to take some steps to verify your web-based conversationalists. Whether they’re facing an AI-generated deepfake or someone with a really convincing wig and fake voice, politicians need to enact some basic verification routines.

The real worry here is that if they fell for this, who knows what else slipped by them via email, social media, or even plain old phone calls. We have to hope that whatever verification systems are in place for alternate methods of communication among politicians are significantly better than the above.


A week in security (April 19 – 25)

Last week on Malwarebytes Labs, we interviewed Youssef Sammouda, a 21-year-old bug bounty hunter who is focused on finding vulnerabilities on Facebook.

We looked into the Codecov supply chain attack, the vulnerabilities in Pulse Secure VPN that are being actively exploited by attackers, and the discovery of SUPERNOVA malware found on a SolarWinds Orion server.

We also featured the technology, particularly facial recognition, that the FBI used to identify one of the Capitol rioters several months after the event; we covered news of a FIN7 sysadmin being sentenced to 10 years in prison for causing “billions in damage”; and criticism of the EU’s proposed ban on certain uses of artificial intelligence for failing to address the technology’s high potential for abuse. Lastly, we provided a comprehensive guide on how to pick the best VPN for you, whether you stream, play video games, or torrent.

Other cybersecurity news

Stay safe!


11-13 year old girls most likely to be targeted by online predators

The Internet Watch Foundation (IWF), a not-for-profit organization in England whose mission is “to eliminate child sexual abuse imagery online”, has recently released its analysis of online predator victimology and the nature of sexual abuse media that is currently prevalent online. The scope of the report covered the whole of 2020.

IWF annual report: what the numbers reveal

The IWF assessed nearly 300,000 reports in 2020, a little more than half of which—153,383—were confirmed to be pages containing material depicting child sexual abuse. Compared to 2019, that is a 16 percent increase in pages hosting or being used to share such imagery.

From these confirmed reports, the IWF were able to establish the following trends:

The majority of child victims are female, and their number has increased since 2019. In 2020, the IWF noted that 93 percent of the child sexual abuse material (CSAM) it assessed involved at least one female child—a 15 percent increase compared to 2019.

Female children account for the overwhelming majority of victims in online child abuse imagery; imagery involving males has significantly decreased since 2019, from 17 percent to 3 percent. (Source: IWF Annual Report 2020)

Online predators are after children ages 11-13. The IWF counted a total of 245,280 hashes—unique codes representing different pictures, videos, or other CSAM—the majority of which depict female children aged 11 to 13. This is followed by children aged 7 to 10.

These hash statistics show a clear trend: the great majority of predators are after imagery of children aged 7 to 13. (Source: IWF Annual Report 2020)

To learn more about the IWF Hash List, watch this YouTube video.
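Hash lists like the IWF’s let platforms block known abuse imagery without storing or viewing it: each known file is reduced to a fingerprint, and uploads whose fingerprints match are blocked. The real list combines cryptographic hashes with perceptual hashes (such as Microsoft’s PhotoDNA) and is distributed only to IWF members; the sketch below shows just the cryptographic-hash side, with entirely hypothetical data:

```python
import hashlib

def image_digest(data: bytes) -> str:
    """Cryptographic fingerprint of a file's exact bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical hash list; the real IWF list is distributed to members only.
KNOWN_ABUSE_HASHES = {
    image_digest(b"placeholder bytes standing in for a known image"),
}

def should_block(upload: bytes, hash_list: set) -> bool:
    """Exact-match check: changing a single byte changes the digest entirely,
    which is why perceptual hashes (e.g. PhotoDNA) complement this approach —
    they still match after resizing, recompression, or small edits."""
    return image_digest(upload) in hash_list
```

The design trade-off is clear from the comment: cryptographic hashes give zero false positives but are trivially evaded by re-encoding, so production systems layer perceptual hashing on top.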

Tink Palmer, CEO of the Marie Collins Foundation, a charity group that helps child victims and their families to recover from sexual abuse involving technology, told the IWF why online predators gravitate toward these age groups.

“In many cases it is pre-pubescent children who are being targeted. They are less accomplished in their social, emotional and psychological development. They listen to grown-ups without questioning them, whereas teenagers are more likely to push back against what an adult tells them.”

Age breakdown of child sexual abuse imagery, which further supports the trend of 11 to 13 year old girls being targeted. (Source: IWF Annual Report 2020)

Self-generated child sexual abuse content is on the rise. 44 percent of the images and videos the IWF analyzed in 2020 were classed as “self-generated” child sexual abuse content—a 77 percent increase from 2019 (38,400 reports) to 2020 (68,000 reports).

“Self-generated” means the child victims themselves created the media that online predators propagate within and beyond certain internet platforms. Such content is predominantly created by 11 to 13 year old girls using smartphones or webcams in their own homes (usually their bedrooms), and much of it was made during periods of COVID-19 lockdowns.

Webcam content is often produced using an online service with a live streaming feature, such as Omegle.

Statistics on self-generated vs contact sexual abuse among female children aged 11 to 13 (Source: IWF Annual Report 2020)

Europe hosts almost all child sexual abuse URLs. The IWF identified that 90 percent of the URLs it analyzed and confirmed to contain CSAM were hosted in Europe, a region in which it also includes Russia and Turkey. Among European countries, the Netherlands is the prime location for hosting CSAM, a constant the IWF has seen through the years.

Due to the lower cost of web hosting, 77 percent of CSAM is physically hosted on servers in the Netherlands. (Source: IWF Annual Report 2020)

Shutting the door on child sexual abusers

The IWF report highlights a worrying trend in child victimology and shows that online predators not only groom their targets but also coerce and bully them into doing their bidding. Child predators usually frequent the platforms that many teenage girls use.

Sadly, there is no single measure or piece of technology that can solve the problem of child exploitation. The best protection for children is effective parenting, and the IWF urges parents and guardians to remember to T.A.L.K. to their children. T.A.L.K. is a list of comprehensive, actionable steps parents and carers can take to help guide their children through a safer online journey as they grow up. T.A.L.K. stands for:

* Talk to your child about online sexual abuse. Start the conversation – and listen to their concerns.

* Agree ground rules about the way you use technology as a family.

* Learn about the platforms and apps your child loves. Take an interest in their online life.

* Know how to use tools, apps and settings that can help to keep your child safe online.

If images or videos of your child have been shared online, it’s important not to blame the child. Instead, reassure them and offer support. Lastly, report the images or videos to the police, the IWF, Childline, or your local equivalent.

“Don’t be shy. You look so pretty in your picture, Evie. Just wanna see what you’ve got under there. Just for me.”
