IT NEWS

Fake job offer leads to $600 million theft

Back in March, popular NFT battler Axie Infinity lay at the heart of a huge cryptocurrency theft inflicted on the Ronin network. From the Ronin newsletter:

There has been a security breach on the Ronin Network. Earlier today, we discovered that on March 23rd, Sky Mavis’s Ronin validator nodes and Axie DAO validator nodes were compromised resulting in 173,600 Ethereum and 25.5M USDC drained from the Ronin bridge in two transactions. The attacker used hacked private keys in order to forge fake withdrawals. We discovered the attack this morning after a report from a user being unable to withdraw 5k ETH from the bridge.

These validator nodes act as a safeguard against criminals making off with lots of money: in order to put together a bogus transaction, an attacker would need to gain access to 5 of the 9 validator nodes. The successful attack happened in two stages.
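For readers unfamiliar with how a threshold scheme like this works, here's a minimal sketch in Python. It is not Ronin's actual implementation; the validator names and the notion of a "signature" are placeholders purely for illustration. The idea is simply that a withdrawal only goes through once enough distinct, known validators have signed it.

```python
# Minimal sketch of an m-of-n validator threshold, loosely modelled on the
# 5-of-9 scheme described above. NOT Ronin's real code: validator names and
# "signatures" are simplified placeholders for illustration only.

REQUIRED_SIGNATURES = 5
VALIDATORS = {f"validator-{i}" for i in range(1, 10)}  # nine known validators


def withdrawal_approved(signatures: set) -> bool:
    """Approve a withdrawal only if at least 5 distinct, known validators signed it."""
    valid = signatures & VALIDATORS  # ignore signatures from unknown parties
    return len(valid) >= REQUIRED_SIGNATURES


# Four compromised validators are not enough to forge a withdrawal...
print(withdrawal_approved({f"validator-{i}" for i in range(1, 5)}))  # False

# ...but a fifth compromised signature tips it over the threshold.
print(withdrawal_approved({f"validator-{i}" for i in range(1, 6)}))  # True
```

The scheme is only as strong as the keys behind each signature: compromise five of them and the "decentralised" check passes as if nothing were wrong.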

An unwary employee provided one foot in the door. An unrevoked permission elsewhere kicked it wide open.

The trap is set

According to The Block, everything fell into place thanks to a senior engineer at the game developer. Two inside sources claim the engineer was fooled by a fake job offer. In fact, it seems multiple employees were approached and encouraged to put in applications. Scams originating from LinkedIn accounts are popular at the moment, and that is exactly where the scammers approached various people on the development team.

One individual is all it took to empty out a big slice of cryptocurrency funds. A job offer made after several interviews was enough to convince the victim to get on board. Perhaps the “extremely generous” compensation package offered should have set off some alarm bells. Having said that, anything digital finance related likely has huge amounts of cash available.

We rate this job offer a 4 out of 5

Unbeknownst to the engineer, everything came crashing down once they received the job offer. A booby-trapped PDF granted the attackers access to Ronin systems, and they were able to compromise 4 out of the required 5 nodes.

Just one node remained to be compromised. How did they do it?

Step up to the plate, non-revoked access. When employees leave an organisation, it’s a good idea to remove access to networks and devices. Unknown entities will happily make use of unattended credentials or permissions. Sure enough, that’s what happened here.

Nudging a node

A Decentralised Autonomous Organisation (DAO) is a way for people in a community to make decisions on a project. The developers approached the Axie DAO for assistance with transactions in November 2021. Ultimately, this is where the fifth node compromise starts to take shape. The issue isn't that the DAO exists. The issue is the permissions granted to the DAO.

From the Substack post detailing the attack:

At the time, Sky Mavis controlled 4/9 validators, which would not be enough to forge withdrawals. The validator key scheme is set up to be decentralized so that it limits an attack vector, similar to this one, but the attacker found a backdoor through our gas-free RPC node, which they abused to get the signature for the Axie DAO validator.  

This traces back to November 2021 when Sky Mavis requested help from the Axie DAO to distribute free transactions due to an immense user load. The Axie DAO allowlisted Sky Mavis to sign various transactions on its behalf. This was discontinued in December 2021, but the allowlist access was not revoked.

Who takes the blame?

In April, the US Department of the Treasury pinned this one on the North Korean hacking group Lazarus. Research elsewhere details Lazarus attacks on the aerospace and defence sectors involving bogus job posts. There's no mention of those attacks having any connection to what happened above. However, the research does highlight further recruitment scams using LinkedIn as the starting point.

Whether you're operating in the cryptocurrency / web3 realm or not, forgotten permissions could cost you dearly. There's also the fake job offer approach to consider. In recent months we've seen other game developers targeted, and deepfakes are now worming their way into the bogus job scene. Malware is rife in this sort of operation, so please be cautious around any promising new offer.


Report: Brazil must do more to encrypt, back up data

Federal government organisations in Brazil may need to reassess their approach to cyberthreats, according to a new report by the country's Federal Audit Court. It outlines multiple areas of concern across 29 key risk areas. One of the biggest problems in the cybercrime section of the report relates to backups, specifically the lack of backups to fall back on when dealing with hacking incidents.

Backups in Brazil: An uphill struggle

Backups are an essential backstop that can help against several forms of attack, as well as mistakes and mishaps. The most obvious one of those would be ransomware. When networks are compromised and systems are locked up, victims with effective backups can try to restore their systems to a point in time before the attack.

Not having backups leaves victims with very limited options. Assuming the attackers don't just vanish into the night, the business may decide to pay the ransom in the hope of recovering the encrypted files. At best, that is a slow, manual process. If things go badly, the decryption tools may be broken and fail to recover data. In some cases, they may not even exist. At this point, an organisation is out of pocket and out of its files.

This is enough to cause showstopping issues for any organisation. And if the affected business performs critical tasks, attacks can have alarming consequences for the community at large. Healthcare and law enforcement are good examples of this.

As a result, getting up to speed on backing up data has become more prominent in recent years. In fact, not just backing up. It’s important that organisations create sensible, organised backups which can be deployed in a crisis. You can’t roll back properly if the files are disorganised and nobody can make sense of which folder goes where.
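As a rough illustration of what "sensible, organised" can look like in practice, here is a small Python sketch that bundles a directory into a dated archive and encrypts it before it goes anywhere near a storage location. It assumes the third-party cryptography package is installed and that the key lives somewhere separate from the backups themselves; it's a starting point, not a complete backup policy.

```python
# Sketch: create a dated, compressed, encrypted backup archive.
# Assumes `pip install cryptography`. Keep the key separate from the backups;
# a key stored next to the archive defeats the purpose of encrypting it.
import tarfile
from datetime import date
from pathlib import Path

from cryptography.fernet import Fernet


def backup(source_dir: str, dest_dir: str, key: bytes) -> Path:
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)

    # Dated archive names keep the backup folder organised and easy to roll back from.
    archive = dest / f"backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)

    # Encrypt the archive so a stolen or leaked backup isn't readable.
    # (Reading the whole file into memory is fine for a sketch, not for huge archives.)
    encrypted = Path(str(archive) + ".enc")
    encrypted.write_bytes(Fernet(key).encrypt(archive.read_bytes()))
    archive.unlink()  # remove the plaintext copy
    return encrypted


# key = Fernet.generate_key()  # generate once, store offline or in a secrets manager
# backup("/srv/data", "/mnt/backups", key)
```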

With this in mind, the statistics don’t make for great reading.

The numbers game

According to the report:

  • 74.6% of organizations (306 out of 410) do not have a formally approved backup policy—basic document, negotiated between the business areas (“owners” of the data/systems) and the organization’s IT, with a view to disciplining issues and procedures related to the execution of backups.
  • 71.2% of organizations that host their systems on their own servers/machines (265 out of 372) do not have a specific backup plan for their main system.
  • 66.6% of organizations that claim to perform backups (254 out of 385), despite implementing physical access control mechanisms to the storage location of these files, do not store them encrypted, which carries a risk of data leakage from the organization, which can cause enormous losses, especially if it involves sensitive and/or confidential information.
  • 60.2% of organizations (247 out of 410) do not keep their copies in at least one non-remotely accessible destination, which carries a risk that, in a cyberattack, the backup files themselves end up being corrupted, deleted and/or encrypted by the attacker or malware, rendering the organization’s backup/restore process equally ineffective.

Backing up: Not a guaranteed fix

The report notes that various initiatives already exist to get people talking about the need for both encryption and backing up. While any rise in backup numbers is a good thing, it’s not necessarily going to come close to solving problems.

One of the worst offshoots of standard ransomware attacks in the past few years is the rise of “double extortion”, where ransomware authors steal data before it’s encrypted, and then threaten to release it if the ransom isn’t paid. One of the reasons double extortion attacks came about is precisely because backups don’t work against data leaks.

For organizations that do keep backups, the challenge is how to set them up and maintain them so they do what’s expected, when they are needed most. This is surprisingly difficult.

David Ruiz, host of Malwarebytes’ Lock and Code podcast, recently spoke to backup expert Matt Crape, a technical account manager at VMware, to find out why backups often fail when it really matters, and how to ensure they don’t.



Apple Lockdown Mode helps protect users from spyware

Apple has announced a new feature of iOS 16 called Lockdown Mode. This new feature is designed to provide a safer environment on iOS for people at high risk of what Apple refers to as “mercenary spyware.” This includes people like journalists and human rights advocates, who are often targeted by oppressive regimes using malware like NSO Group’s Pegasus spyware.

NSO is an Israeli software firm known for developing the Pegasus spyware. Pegasus has been in the spotlight frequently in recent years, and although NSO claims to limit who is allowed to have access to its spyware, each new finding shows it is used by authoritarian nations bent on monitoring and controlling critics. In one of the more infamous cases, and a classic example of how Pegasus has been used, journalist Jamal Khashoggi’s murder has been traced to the use of Pegasus by the United Arab Emirates.

How is this possible? iPhones don’t get viruses!

Contrary to popular belief, it is, in fact, possible for an iPhone to become infected with malware. However, this usually involves a fair bit of effort, typically the use of one or more vulnerabilities in iOS, the operating system for the iPhone. Such vulnerabilities have been known to sell for prices in the million-dollar range, which puts such malware out of the reach of common criminals.

However, companies like NSO can purchase – or pay expert iOS reverse engineers to find – these vulnerabilities and incorporate them into a deployment system for their malware. They then sell access to this malware to make money.

Deploying the malware typically happens via something like a malicious text message, phone call, or website. For example, NSO has been known to use one-click vulnerabilities in Apple’s Messages app to infect a phone if a user taps a link. It has also used more powerful zero-click vulnerabilities, capable of infecting a phone just by sending it a text. Because of this, Apple made changes in iOS 14 to harden Messages.

Messages is not the only possible avenue of attack, however, and it’s impossible to build a wall for the “walled garden” that doesn’t have some holes in it. Every bug is a potential avenue of attack, and all software has bugs. Thus, Apple has taken things to a new level by providing Lockdown Mode to high-risk individuals.

What is Lockdown Mode?

Lockdown Mode puts the iPhone into a state where it is more difficult to attack. This is done by limiting certain aspects of how the phone can be used, to block potential avenues of attack. The improvements, per Apple’s press release, include:

  • Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
  • Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
  • Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
  • Wired connections with a computer or accessory are blocked when iPhone is locked.
  • Configuration profiles cannot be installed, and the device cannot enrol into mobile device management (MDM), while Lockdown Mode is turned on.
Screenshot of a phone being put into Lockdown Mode.
Source: Apple

Although Apple refers to Lockdown Mode as “an extreme, optional protection,” the limitations don’t actually sound particularly difficult to live with. Most users probably won’t turn this on, but these restrictions sound like something that the average user could adapt to fairly easily.

Backing it up with big bucks

To further bolster the security of Lockdown Mode, Apple is offering an unprecedented $2 million bug bounty to anyone who can find a qualifying vulnerability that can be exploited while an iPhone is in Lockdown Mode. Vulnerability hunters everywhere will be pounding on Lockdown Mode, looking for a bug that will score them the big payout. However, this bounty will also help to ensure that the vulnerability gets disclosed to Apple rather than being sold to a company like NSO. It’s easy to do the right thing when it also earns you $2 million!

Should I use Lockdown Mode?

If you’re a journalist and you cover topics that may put a target on your back, absolutely! If you’re a defender of human rights and a critic of countries that trample on those rights, don’t think twice.

Average people, though, will have little chance of ever encountering “mercenary spyware.” However, the extra security definitely wouldn’t hurt, and it looks like it won’t cost you a lot in terms of lost functionality.


Verified Twitter accounts phished via hate speech warnings

Verified Twitter accounts are once again under attack from fraudsters, with the latest phish attempt serving up bogus suspension notices.

Hijacking verified accounts on any platform is a big win for fraudsters. It gives credibility to their scams, especially when the accounts have large followings. This has been a particularly popular tactic to promote NFTs and other crypto-centric scams.

Most recently, we saw hijacked verified accounts pushing messages claiming other verified users had been flagged for spamming. In that instance, compromised accounts were made to look like members of Twitter’s support team.

Hate speech warnings via DM

This time around, the attack is less publicly visible, working its magic via DM instead of posting out in the open. The message sent to a Bleeping Computer reporter reads as follows:

Hey

Your account has been flagged as inauthentic and unsafe by our automated system, spreading hate speech is against our terms of service. We at Twitter take the security of our platform very seriously. That’s why were are suspending your account in 48h if you don’t complete the authentication process. To authenticate your account, follow the link below.

The site, hidden behind a URL shortening service, claims visitors are logging in to the “Twitter help center”. Making use of Twitter APIs to call up the reporter’s test account name, it then asks for their password. A “welcome back” message alongside an image of the reporter’s profile picture makes it all seem that little bit more real.

The phishing site then asks for an email address, and appears to be checking behind the scenes to ensure you’re entering valid details. No spamming the database with deliberately incorrect information here!

The fake site displays a message which claims the account has been proven to be authentic (and in a very twisted way, it has). At this point, the phished victim likely assumes all is well and goes about their day. Meanwhile, the phisher is free to do whatever they want with the now stolen account.

Be careful out there

Whether verified or not, treat warning messages claiming to be from anyone on social media with suspicion. If they’re providing login links tied to threats of suspension, you’re better off visiting the site and contacting support directly.


Google to delete location data of trips to abortion clinics

The historic overturning of Roe v. Wade in June prompted lawmakers and technology companies to respond with deep concern over the future of data. Google is one of those companies.

In a post to “The Keyword” blog last week, Google said it will act further in protecting its users’ privacy by automatically deleting historical records of visits to sensitive locations. These include abortion clinics, addiction treatment facilities, counseling centers, domestic violence shelters, fertility centers, and other places deemed as sensitive locations.

Once Google determines that a user visited a sensitive place, it will delete this data from Location History after the visit. This change will take effect in the coming weeks.

Google’s Location History is off by default, but if a user has turned it on, the company has already provided the tools they can use to easily delete part or all of their data.

Google also has plans to roll out an update allowing users of Fitbit, which tracks periods, to delete multiple logs at once.

However, in a post-Roe America, these changes may still not be enough. As The Verge points out, Google still collects a lot of user activity data, such as Search and YouTube histories, which can be used as evidence in investigations. Google didn’t mention anything about this in its blog post.

Remember also that Google provides user data when served a valid court order. This is enough to get someone’s entire search history in the hands of police for investigation. Although it would not prove guilt, it’s “a liability” for women seeking assistance with abortion, especially in states where abortion has been deemed illegal.

Furthermore, Google is not the only source law enforcement could go to for evidence. Police can also access women’s health records, as HIPAA does not protect against court-issued warrants. There are also data brokers selling location data of people visiting abortion clinics. Though the datasets are allegedly anonymized, according to Vice, it’s possible to de-anonymize users from aggregate data.


IconBurst software supply chain attack offers malicious versions of NPM packages

Researchers discovered evidence of a widespread software supply chain attack involving malicious JavaScript packages offered via the npm package manager. The threat actors behind the IconBurst campaign used typosquatting to mislead developers looking for very popular packages.

npm

npm is short for Node package manager, a name that no longer fully covers what it does. npm is a package manager for the JavaScript programming language maintained by npm, Inc. It consists of a command line client, also called npm, and an online database of public and paid-for private packages, called the npm registry. The free npm registry has become the center of JavaScript code sharing, and with more than one million packages, the largest software registry in the world.

Even non-developers may have heard of Node.js, an open-source, cross-platform, back-end JavaScript runtime environment built on Chrome’s V8 engine that executes JavaScript code outside a web browser. npm is the default package manager for Node.js.

Malicious fakes

Researchers at ReversingLabs identified more than two dozen malicious npm packages. The packages date back up to six months and contain obfuscated JavaScript designed to steal form data from people using the applications or websites that included the malicious packages.

The malicious packages serviced downstream mobile and desktop applications as well as websites. In one case, a malicious package had been downloaded more than 17,000 times. The attackers used typosquatting to trick developers into using the malicious packages.

Typosquatting

Typosquatting is a term you may have seen when reading about Internet scams. In essence it relies on users making typos when entering a site or domain name. Sometimes typosquatting includes techniques like URL hijacking and domain mimicry, but mostly it relies on intercepting typos, hence the name.

In this case, the attackers offered up packages on public repositories with names very similar to those of legitimate packages, such as umbrellajs and packages published by ionic.io.
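As a rough illustration of why look-alike names work, here's a small Python sketch that flags a dependency name as a possible typosquat when it is suspiciously close to a well-known package name. The "popular packages" list and the similarity threshold are made up for the example; real tooling also weighs download counts, publisher history, and more.

```python
# Sketch: flag dependency names that are suspiciously similar to popular packages.
# The POPULAR list and the 0.9 similarity threshold are illustrative only.
from difflib import SequenceMatcher

POPULAR = {"umbrellajs", "ionicons", "lodash", "react", "express"}


def looks_like_typosquat(name: str, threshold: float = 0.9):
    """Return the popular package a name closely imitates, or None."""
    for legit in POPULAR:
        if name == legit:
            return None  # exact match: it's the real thing
        if SequenceMatcher(None, name, legit).ratio() >= threshold:
            return legit
    return None


for candidate in ["umbrellaks", "expresss", "lodash", "left-pad"]:
    hit = looks_like_typosquat(candidate)
    if hit:
        print(f"{candidate!r} looks suspiciously similar to {hit!r}")
# Flags "umbrellaks" and "expresss"; the genuine "lodash" and unrelated "left-pad" pass.
```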

Supply chain attack

A supply chain attack, also called a value-chain or third-party attack, occurs when someone attacks you or your system through an outside partner or provider. Attackers can deploy supply chain attacks to exploit trust relationships between a target and external parties.

This attack can be categorized as a supply chain attack because the developer falling for the typosquatting trick is not the ultimate victim. The actual victim is the user filling out a form on a website or application built by a developer who used a contaminated package.

Obfuscated code

The researchers’ attention was drawn by the use of the same obfuscator in a wide range of npm packages over the past few months. Obfuscation, although uncommon, is not unheard of in open source development. Obfuscation techniques usually aim to hide the underlying code from prying eyes, but the JavaScript obfuscator used in this attack also reduces the size of JavaScript files.

Following the obfuscation trail, the researchers found similarly named packages that could be connected to a handful of npm accounts.

The goal

After deobfuscation, it became clear that the authors had integrated a known login-stealing script into the malicious npm packages. The script, designed to steal information from online forms, originates from a hacking tool called “Hacking PUBG i’d”. PUBG is an online multiplayer shooter with an estimated one billion players. Some of these packages were still available for download at the time of writing.

Once again, this attack shows that the extent to which developers rely on the work of others is not matched by reliable ways to detect malicious code within open source libraries and modules.

The researchers’ blog contains a list of the malicious packages and their associated hashes, for developers who suspect they may have fallen victim to this attack.
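For a quick first pass, a check along these lines can be scripted: read the project's lockfile and compare every installed package name against a denylist built from the researchers' indicators. The denylist entries below are placeholders, not the real IconBurst package names; populate it from the ReversingLabs write-up. Nested v1-style dependencies aren't walked here, so treat it as a rough screen rather than a full audit.

```python
# Sketch: compare a project's package-lock.json against a denylist of
# known-bad package names. DENYLIST entries are placeholders; fill them in
# from the indicators published in the researchers' blog post.
import json
from pathlib import Path

DENYLIST = {"example-bad-package", "another-bad-package"}  # placeholders


def installed_packages(lockfile: str) -> set:
    data = json.loads(Path(lockfile).read_text())
    names = set()
    # npm lockfile v2/v3: keys under "packages" look like "node_modules/<name>"
    for path in data.get("packages", {}):
        if "node_modules/" in path:
            names.add(path.split("node_modules/")[-1])
    # npm lockfile v1: a flat top-level "dependencies" mapping
    names.update(data.get("dependencies", {}).keys())
    return names


hits = installed_packages("package-lock.json") & DENYLIST
for name in sorted(hits):
    print(f"WARNING: known-malicious package installed: {name}")
```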

Stay safe, everyone!


Discord Shame channel goes phishing

A variant of a popular piece of social media fraud has made its way onto Discord servers.

Multiple people are reporting messages of an “Is this you” nature, tied to a specific Discord channel.

The message reads as follows:

heyy ummm idk what happened of its really you but it was your name and the same avatar and you sent a girl erm **** stuff like what the ****? [url] check #shame and youll see. anyways until you explain what happened im blocking you. sorry if this is a misunderstanding but i do not wanna take risks with having creeps on my friendslist.

The server is called Shame | Exposing | Packing | Arguments.

Visitors to the channel are asked to log in via a QR code, and users of Discord are reporting losing access to their account after taking this step. Worse still, their now compromised account begins sending the same spam message to their own contacts.

Discord itself warned users over two years ago to only scan QR codes taken directly from their browser, and to not use codes sent by other users. Unfortunately this has been a concern for unwary Discord users for some time now.

Tips to keep your Discord account secure

  1. Enable two-factor authentication (2FA). While you’re doing this, download your backup codes too. Should you land on a regular phishing page and hand over login details, the attacker will still need your 2FA code to do anything with your account. Note: Some phishers are now stealing 2FA codes too, so this isn’t foolproof, but it is a good security step to have anyway.
  2. Turn on server wide 2FA for channel admins. This means that only admins with 2FA enabled will be able to make use of their available admin powers. This should hopefully keep the channels you’re in that little bit more secure.
  3. Use Privacy and Safety settings. Tick the “Keep me safe” box under “Safe direct messaging”. This means all direct messages will be scanned for age restricted content. You can also toggle “Allow direct messages from server members” to restrict individuals who aren’t on your friends list.
  4. Make use of the block and friend request features. You can tell Discord who, exactly, is able to send you a friend request. Choose from “Everyone”, “Friends of friends”, and “Server members”.
  5. Report hacked and suspicious accounts. Pretty much every option you can think of is available in the Trust & Safety section for reporting rogue accounts and bad behaviour. Individual messages can be reported, and you can see how bad actors are prevented from scraping your user data for nefarious purposes. Finally, a form exists for you to report specific bots sending harmful links.


Cloud-based malware is on the rise. How can you secure your business?

There are a lot of reasons to think the cloud is more secure than on-prem servers, from better data durability to more consistent patch management — but even so, there are many threats to cloud security businesses should address. Cloud-based malware is one of them.

Indeed, while cloud environments are generally more resilient to cyberthreats than on-prem infrastructure, malware delivered over the cloud increased by 68% in early 2021 — opening the door for a variety of different cyber attacks.  

But you might be asking yourself: Doesn’t my cloud provider take care of all of that cloud-based malware? Yes and no.

Your cloud provider will protect your cloud infrastructure in some areas, but under the shared responsibility model, your business is responsible for handling many security threats, incidents, responses, and more. That means, in the case of a cloud-based malware attack, you need to have a game plan ready.

In this post, we’ll cover four ways you can help secure your business against cloud-based malware.

What ways can malware enter the cloud?

One of the main known ways malware can enter the cloud is through a malware injection attack, in which a hacker attempts to inject a malicious service, code, or even virtual machines into the cloud system.

The two most common malware injection attacks are SQL injection attacks, which target vulnerable SQL servers in the cloud infrastructure, and cross-site scripting attacks, which execute malicious scripts on victim web browsers.  Both attacks can be used to steal data or eavesdrop in the cloud.
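To make the SQL injection half of that concrete, here's a minimal Python sketch. It uses the standard library's sqlite3 purely for illustration (any cloud-hosted database and driver works the same way) and shows the difference between pasting user input into a query string and passing it as a bound parameter. The cross-site scripting case is analogous: treat user input as data, by encoding it on output, rather than letting it run as script.

```python
# Minimal illustration of SQL injection versus a parameterized query.
# sqlite3 is used only because it ships with Python; the same principle
# applies to any SQL database running in a cloud environment.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: user input is pasted straight into the SQL string, so the
# WHERE clause becomes always-true and every row leaks.
query = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # returns ALL users

# Safer: the driver binds the value as data, not as SQL.
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```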

Malware can also get into the cloud through file uploads.

Most cloud storage providers today feature file-syncing, which is when files on your local devices are automatically uploaded to the cloud as they’re modified. So, if you download a malicious file on your local device, there’s a route from there to your business’ cloud — where it can access, infect, and encrypt company data.

In fact, malware delivered through cloud storage apps such as Microsoft OneDrive, Google Drive, and Box accounted for 69% of cloud malware downloads in 2021.

Four best practices to prevent cloud-based malware

1. Fix the holes in your cloud security

As we covered in our post on cloud data breaches, there are multiple weak points that hackers use to infiltrate cloud environments — and once they find a way into your cloud, they can drop cloud-based malware such as cryptominers and ransomware.

Fixing the holes in your cloud security should be considered one of your first lines of defense against cloud-based malware. Best practices include:

  • Set up your cloud storage correctly: This is relevant if your cloud storage is provided as Infrastructure-as-a-Service (like Google Cloud Storage or Microsoft Azure Cloud Storage). By not correctly setting up your cloud storage, you risk becoming one of the many companies that suffer a cloud data breach due to a misconfiguration.

2. Protect your endpoints to detect and remediate malware before it can enter the cloud

Let’s say you’re the average small to mid-sized company with up to 750 total endpoints (including all company servers, employee computers, and mobile devices). Let’s also say that a good chunk of these endpoints are connected to the cloud in some way — via Microsoft OneDrive, for example.

At any time, any one of these hundreds of endpoints can become infected with malware. And if you can’t detect and remediate the malware as soon as an endpoint gets infected, there’s a chance it can sync to OneDrive — where it can infect more files.

This is why endpoint detection and response is a great “second line of defense” against cloud-based malware.

Three features of endpoint detection and response that can help track and get rid of malware include:

  • Suspicious activity monitoring: EDR constantly monitors endpoints, creating a “haystack of data” that can be analyzed to pinpoint any Indicators of Compromise (IoCs).
  • Attack isolation: EDR prevents lateral movement of an attack by allowing isolation of a network segment, of a single device, or of a process on the device.  
  • Incident response: EDR can map system changes associated with the malware, thoroughly remove the infection, and return the endpoints to a healthy state.

3. Use a second-opinion cloud storage scanner to detect cloud-based malware

Even if you have fixed all the holes in your cloud security and use a top-notch EDR product, the reality is that malware can still make it through to the cloud — and that’s why regular cloud storage scanning is so important.

No matter what cloud storage service you use, you likely store a lot of data: a mid-sized company can easily have over 40TB of data stored in the form of millions of files.

Needless to say, it can be difficult to monitor and control all the activity in and out of cloud storage repositories, making it easy for malware to hide in the noise as it makes its way to the cloud. That’s where cloud storage scanning comes in.

Cloud storage scanning is exactly what it sounds like: it’s a way to scan for malware in cloud storage apps like Box, Google Drive, and OneDrive. And while most cloud storage apps have malware-scanning capabilities, it’s important to have a second-opinion scanner as well.

A second-opinion cloud storage scanner is a great second line of defense for cloud storage because it’s very possible that your main scanner will fail to detect a cloud-based malware infection that your second-opinion one catches.

4. Have a data backup strategy in place

The worst-case scenario: You’ve properly configured your cloud, secured all your endpoints, and regularly scan your cloud storage — yet cloud-based malware still manages to slip past your defenses and encrypt all your files.

You should have a data backup strategy in place for exactly this kind of ransomware scenario. 

When it comes to ransomware attacks in the cloud — which can cause businesses to lose critical or sensitive data — a data backup strategy is your best chance at recovering the lost files.

There are several important things to consider when implementing a data backup strategy, according to Cybersecurity and Infrastructure Security Agency (CISA) recommendations. In particular, CISA recommends using the 3-2-1 strategy. 

The 3-2-1 strategy means that, for every file, you keep:

  • One on a workstation, stored locally for editing or on a local server, for ease of access.
  • One stored on a cloud backup solution.
  • One stored on long-term storage such as a drive array, replicated offsite, or even an old-school tape drive.

Prevent cloud-based malware from getting a hold on your organization

Cloud-based malware is one of many threats to cloud security that businesses should address, and since cloud providers operate under a shared responsibility model, you need to have a game plan ready in the case of a cloud-based malware attack. In this article, we outlined how malware can enter the cloud and four things you can do to better secure your business against it. 

Interested in reading about real-life examples of cloud-based malware? Read the case study of how a business used Malwarebytes to help eliminate cloud-based threats. 


TikTok is “unacceptable security risk” and should be removed from app stores, says FCC

Brendan Carr, a commissioner of the FCC (Federal Communications Commission), called on the CEOs of Apple and Google to remove TikTok from their app stores. In a letter dated June 24, 2022, Carr told Tim Cook and Sundar Pichai that “TikTok poses an unacceptable national security risk due to its extensive data harvesting being combined with Beijing’s apparently unchecked access to that sensitive data.”

Carr also said:

But it is also clear that TikTok’s pattern of conduct and misrepresentations regarding the unfettered access that persons in Beijing have [to] sensitive US user data … puts it out of compliance with the policies that both of your companies require every app to adhere to as a condition of remaining available on your app stores.

Therefore, I am requesting that you apply the plain text of your app store policies to TikTok and remove it from your app stores for failure to abide by those terms.

In a Twitter thread accompanying the letter, Carr pointed out the national security risks TikTok poses.

Excessive data collection

TikTok is said to collect “everything”, from search and browsing histories; keystroke patterns; biometric identifiers—including faceprints, something that might be used in “unrelated facial recognition technology”, and voiceprints—location data; draft messages; metadata; and data stored on the clipboard, including text, images, and videos.

Carr cited several incidents as evidence that TikTok has been dodgy about its data collection practices.

Relation to the CCP (Communist Party of China)

ByteDance, a company based in Beijing, developed TikTok. In China, it is known as Douyin. Carr mentioned in his letter to Apple and Google that ByteDance “is beholden to the Communist Party of China and required by Chinese law to comply with the PRC‘s surveillance demands.”

Senate and House committee members, cybersecurity researchers, and privacy and civil rights groups have flagged this as a concern. In 2019, two senators labeled TikTok as a “potential counterintelligence threat we cannot ignore”. The American Civil Liberties Union (ACLU) is also concerned about the social platform’s “vague” policies, especially in collecting and using biometric data.

Unclear use of collected data

Collecting data is a non-issue for apps that are clear about it, but they must also say how they use the data they collect. TikTok, it appears, does not abide by this.

“Numerous provisions of the Apple App Store and Google Play Store policies are relevant to TikTok’s pattern of surreptitious data practices—a pattern that runs contrary to its repeated representations,” the letter reads.

“For instance, Section 5.1.2(i) of the Apple App Store Review Guidelines states that an app developer ‘must provide access to information about how and where the data [of an individual] will be used’ and ‘[d]ata collected from apps may only be shared with third parties to improve the app or serve advertising’.”

Is TikTok a “sophisticated surveillance tool”?

TikTok didn’t sit on its hands when news spread of the FCC calling for its removal from major app stores.

Speaking with CNN’s “Reliable Sources”, Michael Beckerman, VP, Head of Public Policy, Americas at TikTok, disputed a large chunk of the FCC’s claims against the social media company, arguing that Carr isn’t an expert on such issues and that the FCC doesn’t have jurisdiction over national security.

“He’s pointing out a number of areas that are simply false in terms of information that we’re collecting, and we’re happy to set the record straight,” Beckerman said.

When asked about the inaccuracies in Carr’s claims, Beckerman responded: “He’s mentioning we’re collecting browser history, like we’re tracking you across the internet. That’s simply false. It is something that a number of social media apps do without checking your browser history across other apps. That is not what TikTok does.”

“He’s talking about faceprints—that is not something we collect,” he said, explaining that the technology in their app is not for identifying individuals but for the purpose of filters, such as knowing when to put glasses or a hat on a face/head.

Concerning keystroke patterns, Beckerman said, “It’s not logging what you’re typing. It’s an anti-fraud measure that checks the rhythm of the way people are typing to ensure it’s not a bot or some other malicious activity.”

When challenged if the CCP has seen any non-public user data, he said, “We have never shared information with the Chinese government nor would we […] We have US-based security teams that manage access, manage the app, and, as actual national security agencies like the CIA during the Trump administration pointed out, the data that’s available on TikTok—because it’s an entertainment app—is not of a national security importance.”

Politicians and privacy advocates have criticized TikTok for potentially exposing US user data to China for years. To allay fears, TikTok teamed up with Oracle and began routing data of its American users to US-based servers.

This, however, doesn’t answer some questions raised when Buzzfeed News broke the story about TikTok employees in China “repeatedly” accessing US user data for at least several months. Such incidents reportedly occurred from September 2021 to January 2022, months before the Oracle data rerouting.

There is also an allegation that a member of TikTok’s trust and safety department said in a meeting that “Everything is seen in China”. A director in another meeting allegedly claimed that a colleague in China is a “Master Admin” who “has access to everything.”

“We want to be trusted,” Beckerman said during the CNN interview. “There’s obviously a lack of trust across the Internet right now, and for us, we’re aiming for the highest, trying to be one of the most trusted apps, and we’re answering questions and being as transparent as we can be.”


Update now! Chrome patches ANOTHER zero-day vulnerability

Google has released version 103.0.5060.114 of Chrome, now available in the Stable Desktop channel worldwide. The main goal of this new version is to patch CVE-2022-2294.

CVE-2022-2294 is a high-severity heap-based buffer overflow weakness in the Web Real-Time Communications (WebRTC) component, and it is being exploited in the wild. This is the fourth Chrome zero-day to be patched in 2022.

Heap buffer overflow

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services).

A buffer overflow is a type of software vulnerability that exists when a program writes data past the boundary of an allocated area of memory, spilling into an adjacent memory region. In software exploit code, two common areas that are targeted for overflows are the stack and the heap.

The heap is an area of memory made available for use by the program. The program can request blocks of memory for its use within the heap. In order to allocate a block of some size, the program makes an explicit request by calling the heap allocation operation.

The vulnerability

WebRTC on Chrome is the first true in-browser solution to real-time communications (RTC). It supports video, voice, and generic data to be sent between peers, allowing developers to build powerful voice- and video-communication solutions. The technology is available on all modern browsers as well as on native clients for all major platforms.

A WebRTC application will usually go through a common application flow: access the media devices, open peer connections, discover peers, and start streaming. Since Google does not disclose details about a vulnerability until everyone has had ample opportunity to install the fix, it is unclear at what stage of this flow the vulnerability exists.

How to protect yourself

If you’re a Chrome user on Windows or Mac, you should update as soon as possible.

The easiest way to update Chrome is to allow it to update automatically, which basically uses the same method as outlined below but does not require your attention. But you can end up lagging behind if you never close the browser or if something goes wrong, such as an extension stopping you from updating the browser.

So, it doesn’t hurt to check now and then. And now would be a good time, given the severity of the vulnerability. My preferred method is to have Chrome open the page chrome://settings/help which you can also find by clicking Settings > About Chrome.

If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is relaunch the browser in order for the update to complete.

updating Chrome

After the update the version should be 103.0.5060.114 or later.

Chrome is up to date

Since WebRTC is a Chromium component, users of other Chromium based browsers may see a similar update.

Stay safe, everyone!
